
PyAgenity Graph Module - Core Workflow Engine.

This module provides the foundational components for building and executing agent workflows in PyAgenity. It implements a graph-based execution model similar to LangGraph, where workflows are defined as directed graphs of interconnected nodes that process state and execute business logic.

Architecture Overview:

The graph module follows a builder pattern for workflow construction and provides a compiled execution environment for runtime performance. The core components work together to enable complex, stateful agent interactions:

  1. StateGraph: The primary builder class for constructing workflows
  2. Node: Executable units that encapsulate functions or tool operations
  3. Edge: Connections between nodes that define execution flow
  4. CompiledGraph: The executable runtime form of a constructed graph
  5. ToolNode: Specialized node for managing and executing tools

Core Components:

StateGraph

The main entry point for building workflows. Provides a fluent API for adding nodes, connecting them with edges, and configuring execution behavior. Supports both static and conditional routing between nodes.

Node

Represents an executable unit within the graph. Wraps functions or ToolNode instances and handles dependency injection, parameter mapping, and execution context. Supports both regular and streaming execution modes.

Edge

Defines connections between nodes, supporting both static (always followed) and conditional (state-dependent) routing. Enables complex branching logic and decision trees within workflows.

CompiledGraph

The executable runtime form created by compiling a StateGraph. Provides synchronous and asynchronous execution methods, state persistence, event publishing, and comprehensive error handling.

ToolNode

A specialized registry and executor for callable functions from various sources including local functions, MCP tools, Composio integrations, and LangChain tools. Supports automatic schema generation and unified tool execution.

Key Features:

  • State Management: Persistent, typed state that flows between nodes
  • Dependency Injection: Automatic injection of framework services
  • Event Publishing: Comprehensive execution monitoring and debugging
  • Streaming Support: Real-time incremental result processing
  • Interrupts & Resume: Pauseable execution with checkpointing
  • Tool Integration: Unified interface for various tool providers
  • Type Safety: Generic typing for custom state classes
  • Error Handling: Robust error recovery and callback mechanisms

Usage Example:

```python
import asyncio

from pyagenity.graph import StateGraph, ToolNode
from pyagenity.utils import START, END, Message  # Message assumed to live in utils


# Define workflow functions
def process_input(state, config):
    # Process user input (analyze_input is an application-specific helper)
    result = analyze_input(state.context[-1].content)
    return [Message.text_message(f"Analysis: {result}")]


def generate_response(state, config):
    # Generate final response (create_response is an application-specific helper)
    response = create_response(state.context)
    return [Message.text_message(response)]


# Create tools
def search_tool(query: str) -> str:
    return f"Search results for: {query}"


tools = ToolNode([search_tool])

# Build the graph
graph = StateGraph()
graph.add_node("process", process_input)
graph.add_node("search", tools)
graph.add_node("respond", generate_response)

# Define flow
graph.add_edge(START, "process")
graph.add_edge("process", "search")
graph.add_edge("search", "respond")
graph.add_edge("respond", END)

# Compile and execute
compiled = graph.compile()
result = compiled.invoke({"messages": [Message.text_message("Hello, world!")]})

# Cleanup (aclose() is a coroutine; outside async code, run it via asyncio)
asyncio.run(compiled.aclose())
```

Integration Points:

The graph module integrates with other PyAgenity components:

  • State Module: Provides AgentState and context management
  • Utils Module: Supplies constants, messages, and helper functions
  • Checkpointer Module: Enables state persistence and recovery
  • Publisher Module: Handles event publishing and monitoring
  • Adapters Module: Connects with external tools and services

This architecture provides a flexible, extensible foundation for building sophisticated agent workflows while maintaining simplicity for common use cases.

Modules:

  • compiled_graph
  • edge: Graph edge representation and routing logic for PyAgenity workflows.
  • node: Node execution and management for PyAgenity graph workflows.
  • state_graph
  • tool_node: ToolNode package.
  • utils

Classes:

  • CompiledGraph: A fully compiled and executable graph ready for workflow execution.
  • Edge: Represents a connection between two nodes in a graph workflow.
  • Node: Represents a node in the graph workflow.
  • StateGraph: Main graph class for orchestrating multi-agent workflows.
  • ToolNode: A unified registry and executor for callable functions from various tool providers.

Attributes

__all__ module-attribute

__all__ = ['CompiledGraph', 'Edge', 'Node', 'StateGraph', 'ToolNode']

Classes

CompiledGraph

A fully compiled and executable graph ready for workflow execution.

CompiledGraph represents the final executable form of a StateGraph after compilation. It encapsulates all the execution logic, handlers, and services needed to run agent workflows. The graph supports both synchronous and asynchronous execution with comprehensive state management, checkpointing, event publishing, and streaming capabilities.

This class is generic over state types to support custom AgentState subclasses, ensuring type safety throughout the execution process.

Key Features:

  • Synchronous and asynchronous execution methods
  • Real-time streaming with incremental results
  • State persistence and checkpointing
  • Interrupt and resume capabilities
  • Event publishing for monitoring and debugging
  • Background task management
  • Graceful error handling and recovery

Attributes:

  • _state: The initial/template state for graph executions.
  • _invoke_handler (InvokeHandler[StateT]): Handler for non-streaming graph execution.
  • _stream_handler (StreamHandler[StateT]): Handler for streaming graph execution.
  • _checkpointer (BaseCheckpointer[StateT] | None): Optional state persistence backend.
  • _publisher (BasePublisher | None): Optional event publishing backend.
  • _store (BaseStore | None): Optional data storage backend.
  • _state_graph (StateGraph[StateT]): Reference to the source StateGraph.
  • _interrupt_before (list[str]): Nodes where execution should pause before execution.
  • _interrupt_after (list[str]): Nodes where execution should pause after execution.
  • _task_manager: Manager for background async tasks.

Example
# After building and compiling a StateGraph
compiled = graph.compile()

# Synchronous execution
result = compiled.invoke({"messages": [Message.text_message("Hello")]})

# Asynchronous execution with streaming
async for chunk in compiled.astream({"messages": [message]}):
    print(f"Streamed: {chunk.content}")

# Graceful cleanup
await compiled.aclose()
Note

CompiledGraph instances should be properly closed using aclose() to release resources like database connections, background tasks, and event publishers.
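
A minimal cleanup sketch, assuming `compiled` was produced by `graph.compile()` and `Message` is imported as in the module example; the thread_id is illustrative. Wrapping execution in try/finally ensures resources are released even when a node raises:

```python
import asyncio

config = {"thread_id": "session-1"}  # hypothetical thread identifier

try:
    result = compiled.invoke(
        {"messages": [Message.text_message("Hello")]},
        config=config,
    )
    print(result["messages"])
finally:
    # aclose() is a coroutine; outside an event loop, drive it with asyncio.run.
    asyncio.run(compiled.aclose())
```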

Methods:

  • __init__
  • aclose: Close the graph and release any resources.
  • ainvoke: Execute the graph asynchronously.
  • astop: Request the current graph execution to stop (async).
  • astream: Execute the graph asynchronously with streaming support.
  • generate_graph: Generate the graph representation.
  • invoke: Execute the graph synchronously and return the final results.
  • stop: Request the current graph execution to stop (sync helper).
  • stream: Execute the graph synchronously with streaming support.

Source code in pyagenity/graph/compiled_graph.py
class CompiledGraph[StateT: AgentState]:
    """A fully compiled and executable graph ready for workflow execution.

    CompiledGraph represents the final executable form of a StateGraph after compilation.
    It encapsulates all the execution logic, handlers, and services needed to run
    agent workflows. The graph supports both synchronous and asynchronous execution
    with comprehensive state management, checkpointing, event publishing, and
    streaming capabilities.

    This class is generic over state types to support custom AgentState subclasses,
    ensuring type safety throughout the execution process.

    Key Features:
    - Synchronous and asynchronous execution methods
    - Real-time streaming with incremental results
    - State persistence and checkpointing
    - Interrupt and resume capabilities
    - Event publishing for monitoring and debugging
    - Background task management
    - Graceful error handling and recovery

    Attributes:
        _state: The initial/template state for graph executions.
        _invoke_handler: Handler for non-streaming graph execution.
        _stream_handler: Handler for streaming graph execution.
        _checkpointer: Optional state persistence backend.
        _publisher: Optional event publishing backend.
        _store: Optional data storage backend.
        _state_graph: Reference to the source StateGraph.
        _interrupt_before: Nodes where execution should pause before execution.
        _interrupt_after: Nodes where execution should pause after execution.
        _task_manager: Manager for background async tasks.

    Example:
        ```python
        # After building and compiling a StateGraph
        compiled = graph.compile()

        # Synchronous execution
        result = compiled.invoke({"messages": [Message.text_message("Hello")]})

        # Asynchronous execution with streaming
        async for chunk in compiled.astream({"messages": [message]}):
            print(f"Streamed: {chunk.content}")

        # Graceful cleanup
        await compiled.aclose()
        ```

    Note:
        CompiledGraph instances should be properly closed using aclose() to
        release resources like database connections, background tasks, and
        event publishers.
    """

    def __init__(
        self,
        state: StateT,
        checkpointer: BaseCheckpointer[StateT] | None,
        publisher: BasePublisher | None,
        store: BaseStore | None,
        state_graph: StateGraph[StateT],
        interrupt_before: list[str],
        interrupt_after: list[str],
        task_manager: BackgroundTaskManager,
    ):
        logger.info(
            f"Initializing CompiledGraph with nodes: {list(state_graph.nodes.keys())}",
        )

        # Save initial state
        self._state = state

        # create handlers
        self._invoke_handler: InvokeHandler[StateT] = InvokeHandler[StateT](
            nodes=state_graph.nodes,  # type: ignore
            edges=state_graph.edges,  # type: ignore
        )
        self._stream_handler: StreamHandler[StateT] = StreamHandler[StateT](
            nodes=state_graph.nodes,  # type: ignore
            edges=state_graph.edges,  # type: ignore
        )

        self._checkpointer: BaseCheckpointer[StateT] | None = checkpointer
        self._publisher: BasePublisher | None = publisher
        self._store: BaseStore | None = store
        self._state_graph: StateGraph[StateT] = state_graph
        self._interrupt_before: list[str] = interrupt_before
        self._interrupt_after: list[str] = interrupt_after
        # generate task manager
        self._task_manager = task_manager

    def _prepare_config(
        self,
        config: dict[str, Any] | None,
        is_stream: bool = False,
    ) -> dict[str, Any]:
        cfg = config or {}
        if "is_stream" not in cfg:
            cfg["is_stream"] = is_stream
        if "user_id" not in cfg:
            cfg["user_id"] = "test-user-id"  # mock user id
        if "run_id" not in cfg:
            cfg["run_id"] = InjectQ.get_instance().try_get("generated_id") or str(uuid4())

        if "timestamp" not in cfg:
            cfg["timestamp"] = datetime.datetime.now().isoformat()

        return cfg

    def invoke(
        self,
        input_data: dict[str, Any],
        config: dict[str, Any] | None = None,
        response_granularity: ResponseGranularity = ResponseGranularity.LOW,
    ) -> dict[str, Any]:
        """Execute the graph synchronously and return the final results.

        Runs the complete graph workflow from start to finish, handling state
        management, node execution, and result formatting. This method automatically
        detects whether to start a fresh execution or resume from an interrupted state.

        The execution is synchronous but internally uses async operations, making it
        suitable for use in non-async contexts while still benefiting from async
        capabilities for I/O operations.

        Args:
            input_data: Input dictionary for graph execution. For new executions,
                should contain 'messages' key with list of initial messages.
                For resumed executions, can contain additional data to merge.
            config: Optional configuration dictionary containing execution settings:
                - user_id: Identifier for the user/session
                - thread_id: Unique identifier for this execution thread
                - run_id: Unique identifier for this specific run
                - recursion_limit: Maximum steps before stopping (default: 25)
            response_granularity: Level of detail in the response:
                - LOW: Returns only messages (default)
                - PARTIAL: Returns context, summary, and messages
                - FULL: Returns complete state and messages

        Returns:
            Dictionary containing execution results formatted according to the
            specified granularity level. Always includes execution messages
            and may include additional state information.

        Raises:
            ValueError: If input_data is invalid for new execution.
            GraphRecursionError: If execution exceeds recursion limit.
            Various exceptions: Depending on node execution failures.

        Example:
            ```python
            # Basic execution
            result = compiled.invoke({"messages": [Message.text_message("Process this data")]})
            print(result["messages"])  # Final execution messages

            # With configuration and full details
            result = compiled.invoke(
                input_data={"messages": [message]},
                config={"user_id": "user123", "thread_id": "session456", "recursion_limit": 50},
                response_granularity=ResponseGranularity.FULL,
            )
            print(result["state"])  # Complete final state
            ```

        Note:
            This method uses asyncio.run() internally, so it should not be called
            from within an async context. Use ainvoke() instead for async execution.
        """
        logger.info(
            "Starting synchronous graph execution with %d input keys, granularity=%s",
            len(input_data) if input_data else 0,
            response_granularity,
        )
        logger.debug("Input data keys: %s", list(input_data.keys()) if input_data else [])
        # Async Will Handle Event Publish

        try:
            result = asyncio.run(self.ainvoke(input_data, config, response_granularity))
            logger.info("Synchronous graph execution completed successfully")
            return result
        except Exception as e:
            logger.exception("Synchronous graph execution failed: %s", e)
            raise

    async def ainvoke(
        self,
        input_data: dict[str, Any],
        config: dict[str, Any] | None = None,
        response_granularity: ResponseGranularity = ResponseGranularity.LOW,
    ) -> dict[str, Any]:
        """Execute the graph asynchronously.

        Auto-detects whether to start fresh execution or resume from interrupted state
        based on the AgentState's execution metadata.

        Args:
            input_data: Input dict with 'messages' key (for new execution) or
                       additional data for resuming
            config: Configuration dictionary
            response_granularity: Response parsing granularity

        Returns:
            Response dict based on granularity
        """
        cfg = self._prepare_config(config, is_stream=False)

        return await self._invoke_handler.invoke(
            input_data,
            cfg,
            self._state,
            response_granularity,
        )

    def stop(self, config: dict[str, Any]) -> dict[str, Any]:
        """Request the current graph execution to stop (sync helper).

        This sets a stop flag in the checkpointer's thread store keyed by thread_id.
        Handlers periodically check this flag and interrupt execution.
        Returns a small status dict.
        """
        return asyncio.run(self.astop(config))

    async def astop(self, config: dict[str, Any]) -> dict[str, Any]:
        """Request the current graph execution to stop (async).

        Contract:
        - Requires a valid thread_id in config
        - If no active thread or no checkpointer, returns not-running
        - If state exists and is running, set stop_requested flag in thread info
        """
        cfg = self._prepare_config(config, is_stream=bool(config.get("is_stream", False)))
        if not self._checkpointer:
            return {"ok": False, "reason": "no-checkpointer"}

        # Load state to see if this thread is running
        state = await self._checkpointer.aget_state_cache(
            cfg
        ) or await self._checkpointer.aget_state(cfg)
        if not state:
            return {"ok": False, "running": False, "reason": "no-state"}

        running = state.is_running() and not state.is_interrupted()
        # Set stop flag regardless; handlers will act if running
        if running:
            state.execution_meta.stop_current_execution = StopRequestStatus.STOP_REQUESTED
            # update cache
            # Cache update is enough; state will be picked up by running execution
            # As its running, cache will be available immediately
            await self._checkpointer.aput_state_cache(cfg, state)
            # Fixme: consider putting to main state as well
            # await self._checkpointer.aput_state(cfg, state)
            logger.info("Set stop_current_execution flag for thread_id: %s", cfg.get("thread_id"))
            return {"ok": True, "running": running}

        logger.info(
            "No running execution to stop for thread_id: %s (running=%s, interrupted=%s)",
            cfg.get("thread_id"),
            running,
            state.is_interrupted(),
        )
        return {"ok": True, "running": running, "reason": "not-running"}

    def stream(
        self,
        input_data: dict[str, Any],
        config: dict[str, Any] | None = None,
        response_granularity: ResponseGranularity = ResponseGranularity.LOW,
    ) -> Generator[Message]:
        """Execute the graph synchronously with streaming support.

        Yields Message objects containing incremental responses.
        If nodes return streaming responses, yields them directly.
        If nodes return complete responses, simulates streaming by chunking.

        Args:
            input_data: Input dict
            config: Configuration dictionary
            response_granularity: Response parsing granularity

        Yields:
            Message objects with incremental content
        """

        # For sync streaming, we'll use asyncio.run to handle the async implementation
        async def _async_stream():
            async for chunk in self.astream(input_data, config, response_granularity):
                yield chunk

        # Convert async generator to sync iteration with a dedicated event loop
        gen = _async_stream()
        loop = asyncio.new_event_loop()
        policy = asyncio.get_event_loop_policy()
        try:
            previous_loop = policy.get_event_loop()
        except Exception:
            previous_loop = None
        asyncio.set_event_loop(loop)
        logger.info("Synchronous streaming started")

        try:
            while True:
                try:
                    chunk = loop.run_until_complete(gen.__anext__())
                    yield chunk
                except StopAsyncIteration:
                    break
        finally:
            # Attempt to close the async generator cleanly
            with contextlib.suppress(Exception):
                loop.run_until_complete(gen.aclose())  # type: ignore[attr-defined]
            # Restore previous loop if any, then close created loop
            try:
                if previous_loop is not None:
                    asyncio.set_event_loop(previous_loop)
            finally:
                loop.close()
        logger.info("Synchronous streaming completed")

    async def astream(
        self,
        input_data: dict[str, Any],
        config: dict[str, Any] | None = None,
        response_granularity: ResponseGranularity = ResponseGranularity.LOW,
    ) -> AsyncIterator[Message]:
        """Execute the graph asynchronously with streaming support.

        Yields Message objects containing incremental responses.
        If nodes return streaming responses, yields them directly.
        If nodes return complete responses, simulates streaming by chunking.

        Args:
            input_data: Input dict
            config: Configuration dictionary
            response_granularity: Response parsing granularity

        Yields:
            Message objects with incremental content
        """

        cfg = self._prepare_config(config, is_stream=True)

        async for chunk in self._stream_handler.stream(
            input_data,
            cfg,
            self._state,
            response_granularity,
        ):
            yield chunk

    async def aclose(self) -> dict[str, str]:
        """Close the graph and release any resources."""
        # close checkpointer
        stats = {}
        try:
            if self._checkpointer:
                await self._checkpointer.arelease()
                logger.info("Checkpointer closed successfully")
                stats["checkpointer"] = "closed"
        except Exception as e:
            stats["checkpointer"] = f"error: {e}"
            logger.error(f"Error closing graph: {e}")

        # Close Publisher
        try:
            if self._publisher:
                await self._publisher.close()
                logger.info("Publisher closed successfully")
                stats["publisher"] = "closed"
        except Exception as e:
            stats["publisher"] = f"error: {e}"
            logger.error(f"Error closing publisher: {e}")

        # Close Store
        try:
            if self._store:
                await self._store.arelease()
                logger.info("Store closed successfully")
                stats["store"] = "closed"
        except Exception as e:
            stats["store"] = f"error: {e}"
            logger.error(f"Error closing store: {e}")

        # Wait for all background tasks to complete
        try:
            await self._task_manager.wait_for_all()
            logger.info("All background tasks completed successfully")
            stats["background_tasks"] = "completed"
        except Exception as e:
            stats["background_tasks"] = f"error: {e}"
            logger.error(f"Error waiting for background tasks: {e}")

        logger.info(f"Graph close stats: {stats}")
        # You can also return or process the stats as needed
        return stats

    def generate_graph(self) -> dict[str, Any]:
        """Generate the graph representation.

        Returns:
            A dictionary representing the graph structure.
        """
        graph = {
            "info": {},
            "nodes": [],
            "edges": [],
        }
        # Populate the graph with nodes and edges
        for node_name in self._state_graph.nodes:
            graph["nodes"].append(
                {
                    "id": str(uuid4()),
                    "name": node_name,
                }
            )

        for edge in self._state_graph.edges:
            graph["edges"].append(
                {
                    "id": str(uuid4()),
                    "source": edge.from_node,
                    "target": edge.to_node,
                }
            )

        # Add few more extra info
        graph["info"] = {
            "node_count": len(graph["nodes"]),
            "edge_count": len(graph["edges"]),
            "checkpointer": self._checkpointer is not None,
            "checkpointer_type": type(self._checkpointer).__name__ if self._checkpointer else None,
            "publisher": self._publisher is not None,
            "store": self._store is not None,
            "interrupt_before": self._interrupt_before,
            "interrupt_after": self._interrupt_after,
            "context_type": self._state_graph._context_manager.__class__.__name__,
            "id_generator": self._state_graph._id_generator.__class__.__name__,
            "id_type": self._state_graph._id_generator.id_type.value,
            "state_type": self._state.__class__.__name__,
            "state_fields": list(self._state.model_dump().keys()),
        }
        return graph

Functions

__init__
__init__(state, checkpointer, publisher, store, state_graph, interrupt_before, interrupt_after, task_manager)
Source code in pyagenity/graph/compiled_graph.py
def __init__(
    self,
    state: StateT,
    checkpointer: BaseCheckpointer[StateT] | None,
    publisher: BasePublisher | None,
    store: BaseStore | None,
    state_graph: StateGraph[StateT],
    interrupt_before: list[str],
    interrupt_after: list[str],
    task_manager: BackgroundTaskManager,
):
    logger.info(
        f"Initializing CompiledGraph with nodes: {list(state_graph.nodes.keys())}",
    )

    # Save initial state
    self._state = state

    # create handlers
    self._invoke_handler: InvokeHandler[StateT] = InvokeHandler[StateT](
        nodes=state_graph.nodes,  # type: ignore
        edges=state_graph.edges,  # type: ignore
    )
    self._stream_handler: StreamHandler[StateT] = StreamHandler[StateT](
        nodes=state_graph.nodes,  # type: ignore
        edges=state_graph.edges,  # type: ignore
    )

    self._checkpointer: BaseCheckpointer[StateT] | None = checkpointer
    self._publisher: BasePublisher | None = publisher
    self._store: BaseStore | None = store
    self._state_graph: StateGraph[StateT] = state_graph
    self._interrupt_before: list[str] = interrupt_before
    self._interrupt_after: list[str] = interrupt_after
    # generate task manager
    self._task_manager = task_manager
aclose async
aclose()

Close the graph and release any resources.

Source code in pyagenity/graph/compiled_graph.py
async def aclose(self) -> dict[str, str]:
    """Close the graph and release any resources."""
    # close checkpointer
    stats = {}
    try:
        if self._checkpointer:
            await self._checkpointer.arelease()
            logger.info("Checkpointer closed successfully")
            stats["checkpointer"] = "closed"
    except Exception as e:
        stats["checkpointer"] = f"error: {e}"
        logger.error(f"Error closing graph: {e}")

    # Close Publisher
    try:
        if self._publisher:
            await self._publisher.close()
            logger.info("Publisher closed successfully")
            stats["publisher"] = "closed"
    except Exception as e:
        stats["publisher"] = f"error: {e}"
        logger.error(f"Error closing publisher: {e}")

    # Close Store
    try:
        if self._store:
            await self._store.arelease()
            logger.info("Store closed successfully")
            stats["store"] = "closed"
    except Exception as e:
        stats["store"] = f"error: {e}"
        logger.error(f"Error closing store: {e}")

    # Wait for all background tasks to complete
    try:
        await self._task_manager.wait_for_all()
        logger.info("All background tasks completed successfully")
        stats["background_tasks"] = "completed"
    except Exception as e:
        stats["background_tasks"] = f"error: {e}"
        logger.error(f"Error waiting for background tasks: {e}")

    logger.info(f"Graph close stats: {stats}")
    # You can also return or process the stats as needed
    return stats
ainvoke async
ainvoke(input_data, config=None, response_granularity=ResponseGranularity.LOW)

Execute the graph asynchronously.

Auto-detects whether to start fresh execution or resume from interrupted state based on the AgentState's execution metadata.

Parameters:

  • input_data (dict[str, Any], required): Input dict with 'messages' key (for new execution) or additional data for resuming.
  • config (dict[str, Any] | None, default None): Configuration dictionary.
  • response_granularity (ResponseGranularity, default LOW): Response parsing granularity.

Returns:

  • dict[str, Any]: Response dict based on granularity.
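
A short usage sketch, assuming `compiled` and `Message` are available as in the module example; the thread_id is illustrative. Inside async code, prefer ainvoke over invoke, which calls asyncio.run internally:

```python
import asyncio


async def run_workflow() -> None:
    # Await the graph directly instead of calling the sync invoke() wrapper.
    result = await compiled.ainvoke(
        {"messages": [Message.text_message("Summarize the report")]},
        config={"thread_id": "session-1"},  # hypothetical thread id
    )
    print(result["messages"])


asyncio.run(run_workflow())
```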

Source code in pyagenity/graph/compiled_graph.py
async def ainvoke(
    self,
    input_data: dict[str, Any],
    config: dict[str, Any] | None = None,
    response_granularity: ResponseGranularity = ResponseGranularity.LOW,
) -> dict[str, Any]:
    """Execute the graph asynchronously.

    Auto-detects whether to start fresh execution or resume from interrupted state
    based on the AgentState's execution metadata.

    Args:
        input_data: Input dict with 'messages' key (for new execution) or
                   additional data for resuming
        config: Configuration dictionary
        response_granularity: Response parsing granularity

    Returns:
        Response dict based on granularity
    """
    cfg = self._prepare_config(config, is_stream=False)

    return await self._invoke_handler.invoke(
        input_data,
        cfg,
        self._state,
        response_granularity,
    )
astop async
astop(config)

Request the current graph execution to stop (async).

Contract:

  • Requires a valid thread_id in config
  • If no active thread or no checkpointer, returns not-running
  • If state exists and is running, set stop_requested flag in thread info
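
A usage sketch, assuming the graph was compiled with a checkpointer and that "session-1" names a thread whose run is in progress; the call is made from inside an async function:

```python
# Cooperative stop: handlers check the flag between steps, so the run
# ends at its next opportunity rather than immediately.
status = await compiled.astop({"thread_id": "session-1"})
if not status.get("ok"):
    print("Stop request failed:", status.get("reason"))
elif not status.get("running"):
    print("No running execution:", status.get("reason"))
```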

Source code in pyagenity/graph/compiled_graph.py
async def astop(self, config: dict[str, Any]) -> dict[str, Any]:
    """Request the current graph execution to stop (async).

    Contract:
    - Requires a valid thread_id in config
    - If no active thread or no checkpointer, returns not-running
    - If state exists and is running, set stop_requested flag in thread info
    """
    cfg = self._prepare_config(config, is_stream=bool(config.get("is_stream", False)))
    if not self._checkpointer:
        return {"ok": False, "reason": "no-checkpointer"}

    # Load state to see if this thread is running
    state = await self._checkpointer.aget_state_cache(
        cfg
    ) or await self._checkpointer.aget_state(cfg)
    if not state:
        return {"ok": False, "running": False, "reason": "no-state"}

    running = state.is_running() and not state.is_interrupted()
    # Set stop flag regardless; handlers will act if running
    if running:
        state.execution_meta.stop_current_execution = StopRequestStatus.STOP_REQUESTED
        # update cache
        # Cache update is enough; state will be picked up by running execution
        # As its running, cache will be available immediately
        await self._checkpointer.aput_state_cache(cfg, state)
        # Fixme: consider putting to main state as well
        # await self._checkpointer.aput_state(cfg, state)
        logger.info("Set stop_current_execution flag for thread_id: %s", cfg.get("thread_id"))
        return {"ok": True, "running": running}

    logger.info(
        "No running execution to stop for thread_id: %s (running=%s, interrupted=%s)",
        cfg.get("thread_id"),
        running,
        state.is_interrupted(),
    )
    return {"ok": True, "running": running, "reason": "not-running"}
astream async
astream(input_data, config=None, response_granularity=ResponseGranularity.LOW)

Execute the graph asynchronously with streaming support.

Yields Message objects containing incremental responses. If nodes return streaming responses, yields them directly. If nodes return complete responses, simulates streaming by chunking.

Parameters:

  • input_data (dict[str, Any], required): Input dict.
  • config (dict[str, Any] | None, default None): Configuration dictionary.
  • response_granularity (ResponseGranularity, default LOW): Response parsing granularity.

Yields:

  • AsyncIterator[Message]: Message objects with incremental content.

Source code in pyagenity/graph/compiled_graph.py
async def astream(
    self,
    input_data: dict[str, Any],
    config: dict[str, Any] | None = None,
    response_granularity: ResponseGranularity = ResponseGranularity.LOW,
) -> AsyncIterator[Message]:
    """Execute the graph asynchronously with streaming support.

    Yields Message objects containing incremental responses.
    If nodes return streaming responses, yields them directly.
    If nodes return complete responses, simulates streaming by chunking.

    Args:
        input_data: Input dict
        config: Configuration dictionary
        response_granularity: Response parsing granularity

    Yields:
        Message objects with incremental content
    """

    cfg = self._prepare_config(config, is_stream=True)

    async for chunk in self._stream_handler.stream(
        input_data,
        cfg,
        self._state,
        response_granularity,
    ):
        yield chunk
generate_graph
generate_graph()

Generate the graph representation.

Returns:

  • dict[str, Any]: A dictionary representing the graph structure.
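
A small inspection sketch, assuming `compiled` is the compiled graph from the examples above:

```python
# Inspect the structure of a compiled graph without executing it.
layout = compiled.generate_graph()

info = layout["info"]
print(f'{info["node_count"]} nodes, {info["edge_count"]} edges')
for edge in layout["edges"]:
    print(f'{edge["source"]} -> {edge["target"]}')
```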

Source code in pyagenity/graph/compiled_graph.py
def generate_graph(self) -> dict[str, Any]:
    """Generate the graph representation.

    Returns:
        A dictionary representing the graph structure.
    """
    graph = {
        "info": {},
        "nodes": [],
        "edges": [],
    }
    # Populate the graph with nodes and edges
    for node_name in self._state_graph.nodes:
        graph["nodes"].append(
            {
                "id": str(uuid4()),
                "name": node_name,
            }
        )

    for edge in self._state_graph.edges:
        graph["edges"].append(
            {
                "id": str(uuid4()),
                "source": edge.from_node,
                "target": edge.to_node,
            }
        )

    # Add few more extra info
    graph["info"] = {
        "node_count": len(graph["nodes"]),
        "edge_count": len(graph["edges"]),
        "checkpointer": self._checkpointer is not None,
        "checkpointer_type": type(self._checkpointer).__name__ if self._checkpointer else None,
        "publisher": self._publisher is not None,
        "store": self._store is not None,
        "interrupt_before": self._interrupt_before,
        "interrupt_after": self._interrupt_after,
        "context_type": self._state_graph._context_manager.__class__.__name__,
        "id_generator": self._state_graph._id_generator.__class__.__name__,
        "id_type": self._state_graph._id_generator.id_type.value,
        "state_type": self._state.__class__.__name__,
        "state_fields": list(self._state.model_dump().keys()),
    }
    return graph
invoke
invoke(input_data, config=None, response_granularity=ResponseGranularity.LOW)

Execute the graph synchronously and return the final results.

Runs the complete graph workflow from start to finish, handling state management, node execution, and result formatting. This method automatically detects whether to start a fresh execution or resume from an interrupted state.

The execution is synchronous but internally uses async operations, making it suitable for use in non-async contexts while still benefiting from async capabilities for I/O operations.

Parameters:

  • input_data (dict[str, Any], required): Input dictionary for graph execution. For new executions, should contain 'messages' key with list of initial messages. For resumed executions, can contain additional data to merge.
  • config (dict[str, Any] | None, default None): Optional configuration dictionary containing execution settings:
      - user_id: Identifier for the user/session
      - thread_id: Unique identifier for this execution thread
      - run_id: Unique identifier for this specific run
      - recursion_limit: Maximum steps before stopping (default: 25)
  • response_granularity (ResponseGranularity, default LOW): Level of detail in the response:
      - LOW: Returns only messages (default)
      - PARTIAL: Returns context, summary, and messages
      - FULL: Returns complete state and messages

Returns:

  • dict[str, Any]: Dictionary containing execution results formatted according to the specified granularity level. Always includes execution messages and may include additional state information.

Raises:

  • ValueError: If input_data is invalid for new execution.
  • GraphRecursionError: If execution exceeds recursion limit.
  • Various exceptions: Depending on node execution failures.

Example
# Basic execution
result = compiled.invoke({"messages": [Message.text_message("Process this data")]})
print(result["messages"])  # Final execution messages

# With configuration and full details
result = compiled.invoke(
    input_data={"messages": [message]},
    config={"user_id": "user123", "thread_id": "session456", "recursion_limit": 50},
    response_granularity=ResponseGranularity.FULL,
)
print(result["state"])  # Complete final state
Note

This method uses asyncio.run() internally, so it should not be called from within an async context. Use ainvoke() instead for async execution.

Source code in pyagenity/graph/compiled_graph.py
def invoke(
    self,
    input_data: dict[str, Any],
    config: dict[str, Any] | None = None,
    response_granularity: ResponseGranularity = ResponseGranularity.LOW,
) -> dict[str, Any]:
    """Execute the graph synchronously and return the final results.

    Runs the complete graph workflow from start to finish, handling state
    management, node execution, and result formatting. This method automatically
    detects whether to start a fresh execution or resume from an interrupted state.

    The execution is synchronous but internally uses async operations, making it
    suitable for use in non-async contexts while still benefiting from async
    capabilities for I/O operations.

    Args:
        input_data: Input dictionary for graph execution. For new executions,
            should contain 'messages' key with list of initial messages.
            For resumed executions, can contain additional data to merge.
        config: Optional configuration dictionary containing execution settings:
            - user_id: Identifier for the user/session
            - thread_id: Unique identifier for this execution thread
            - run_id: Unique identifier for this specific run
            - recursion_limit: Maximum steps before stopping (default: 25)
        response_granularity: Level of detail in the response:
            - LOW: Returns only messages (default)
            - PARTIAL: Returns context, summary, and messages
            - FULL: Returns complete state and messages

    Returns:
        Dictionary containing execution results formatted according to the
        specified granularity level. Always includes execution messages
        and may include additional state information.

    Raises:
        ValueError: If input_data is invalid for new execution.
        GraphRecursionError: If execution exceeds recursion limit.
        Various exceptions: Depending on node execution failures.

    Example:
        ```python
        # Basic execution
        result = compiled.invoke({"messages": [Message.text_message("Process this data")]})
        print(result["messages"])  # Final execution messages

        # With configuration and full details
        result = compiled.invoke(
            input_data={"messages": [message]},
            config={"user_id": "user123", "thread_id": "session456", "recursion_limit": 50},
            response_granularity=ResponseGranularity.FULL,
        )
        print(result["state"])  # Complete final state
        ```

    Note:
        This method uses asyncio.run() internally, so it should not be called
        from within an async context. Use ainvoke() instead for async execution.
    """
    logger.info(
        "Starting synchronous graph execution with %d input keys, granularity=%s",
        len(input_data) if input_data else 0,
        response_granularity,
    )
    logger.debug("Input data keys: %s", list(input_data.keys()) if input_data else [])
    # Async Will Handle Event Publish

    try:
        result = asyncio.run(self.ainvoke(input_data, config, response_granularity))
        logger.info("Synchronous graph execution completed successfully")
        return result
    except Exception as e:
        logger.exception("Synchronous graph execution failed: %s", e)
        raise
stop
stop(config)

Request the current graph execution to stop (sync helper).

This sets a stop flag in the checkpointer's thread store keyed by thread_id. Handlers periodically check this flag and interrupt execution. Returns a small status dict.
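
A sketch of the synchronous variant, assuming a checkpointer-backed graph and an illustrative thread_id; because it uses asyncio.run internally, call it only from non-async code:

```python
# For example, from an admin endpoint or a signal handler.
status = compiled.stop({"thread_id": "session-1"})
print(status)  # e.g. {"ok": True, "running": True}
```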

Source code in pyagenity/graph/compiled_graph.py
def stop(self, config: dict[str, Any]) -> dict[str, Any]:
    """Request the current graph execution to stop (sync helper).

    This sets a stop flag in the checkpointer's thread store keyed by thread_id.
    Handlers periodically check this flag and interrupt execution.
    Returns a small status dict.
    """
    return asyncio.run(self.astop(config))
stream
stream(input_data, config=None, response_granularity=ResponseGranularity.LOW)

Execute the graph synchronously with streaming support.

Yields Message objects containing incremental responses. If nodes return streaming responses, yields them directly. If nodes return complete responses, simulates streaming by chunking.

Parameters:

  • input_data (dict[str, Any], required): Input dict.
  • config (dict[str, Any] | None, default None): Configuration dictionary.
  • response_granularity (ResponseGranularity, default LOW): Response parsing granularity.

Yields:

  • Generator[Message]: Message objects with incremental content.
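
A sync streaming sketch, assuming `compiled` and `Message` from the module example; the thread_id is illustrative:

```python
# Iterate over incremental Message chunks from plain, non-async code.
for chunk in compiled.stream(
    {"messages": [Message.text_message("Tell me a short story")]},
    config={"thread_id": "session-1"},  # hypothetical thread id
):
    print(chunk.content, end="", flush=True)
```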

Source code in pyagenity/graph/compiled_graph.py
def stream(
    self,
    input_data: dict[str, Any],
    config: dict[str, Any] | None = None,
    response_granularity: ResponseGranularity = ResponseGranularity.LOW,
) -> Generator[Message]:
    """Execute the graph synchronously with streaming support.

    Yields Message objects containing incremental responses.
    If nodes return streaming responses, yields them directly.
    If nodes return complete responses, simulates streaming by chunking.

    Args:
        input_data: Input dict
        config: Configuration dictionary
        response_granularity: Response parsing granularity

    Yields:
        Message objects with incremental content
    """

    # For sync streaming, we'll use asyncio.run to handle the async implementation
    async def _async_stream():
        async for chunk in self.astream(input_data, config, response_granularity):
            yield chunk

    # Convert async generator to sync iteration with a dedicated event loop
    gen = _async_stream()
    loop = asyncio.new_event_loop()
    policy = asyncio.get_event_loop_policy()
    try:
        previous_loop = policy.get_event_loop()
    except Exception:
        previous_loop = None
    asyncio.set_event_loop(loop)
    logger.info("Synchronous streaming started")

    try:
        while True:
            try:
                chunk = loop.run_until_complete(gen.__anext__())
                yield chunk
            except StopAsyncIteration:
                break
    finally:
        # Attempt to close the async generator cleanly
        with contextlib.suppress(Exception):
            loop.run_until_complete(gen.aclose())  # type: ignore[attr-defined]
        # Restore previous loop if any, then close created loop
        try:
            if previous_loop is not None:
                asyncio.set_event_loop(previous_loop)
        finally:
            loop.close()
    logger.info("Synchronous streaming completed")

Edge

Represents a connection between two nodes in a graph workflow.

An Edge defines the relationship and routing logic between nodes, specifying how execution should flow from one node to another. Edges can be either static (unconditional) or conditional based on runtime state evaluation.

Edges support complex routing scenarios including:

  • Simple static connections between nodes
  • Conditional routing based on state evaluation
  • Dynamic routing with multiple possible destinations
  • Decision trees and branching logic

Attributes:

  • from_node: Name of the source node where execution originates.
  • to_node: Name of the destination node where execution continues.
  • condition: Optional callable that determines if this edge should be followed. If None, the edge is always followed (static edge).
  • condition_result (str | None): Optional value to match against condition result for mapped conditional edges.

Example
# Static edge - always followed
static_edge = Edge("start", "process")


# Conditional edge - followed only if condition returns True
def needs_approval(state):
    return state.data.get("requires_approval", False)


conditional_edge = Edge("process", "approval", condition=needs_approval)


# Mapped conditional edge - follows based on specific condition result
def get_priority(state):
    return state.data.get("priority", "normal")


high_priority_edge = Edge("triage", "urgent", condition=get_priority)
high_priority_edge.condition_result = "high"

Methods:

  • __init__: Initialize a new Edge with source, destination, and optional condition.

Source code in pyagenity/graph/edge.py
class Edge:
    """Represents a connection between two nodes in a graph workflow.

    An Edge defines the relationship and routing logic between nodes, specifying
    how execution should flow from one node to another. Edges can be either
    static (unconditional) or conditional based on runtime state evaluation.

    Edges support complex routing scenarios including:
    - Simple static connections between nodes
    - Conditional routing based on state evaluation
    - Dynamic routing with multiple possible destinations
    - Decision trees and branching logic

    Attributes:
        from_node: Name of the source node where execution originates.
        to_node: Name of the destination node where execution continues.
        condition: Optional callable that determines if this edge should be
            followed. If None, the edge is always followed (static edge).
        condition_result: Optional value to match against condition result
            for mapped conditional edges.

    Example:
        ```python
        # Static edge - always followed
        static_edge = Edge("start", "process")


        # Conditional edge - followed only if condition returns True
        def needs_approval(state):
            return state.data.get("requires_approval", False)


        conditional_edge = Edge("process", "approval", condition=needs_approval)


        # Mapped conditional edge - follows based on specific condition result
        def get_priority(state):
            return state.data.get("priority", "normal")


        high_priority_edge = Edge("triage", "urgent", condition=get_priority)
        high_priority_edge.condition_result = "high"
        ```
    """

    def __init__(
        self,
        from_node: str,
        to_node: str,
        condition: Callable | None = None,
    ):
        """Initialize a new Edge with source, destination, and optional condition.

        Args:
            from_node: Name of the source node. Must match a node name in the graph.
            to_node: Name of the destination node. Must match a node name in the graph
                or be a special constant like END.
            condition: Optional callable that takes an AgentState as argument and
                returns a value to determine if this edge should be followed.
                If None, this is a static edge that's always followed.

        Note:
            The condition function should be deterministic and side-effect free
            for predictable execution behavior. It receives the current AgentState
            and should return a boolean (for simple conditions) or a string/value
            (for mapped conditional routing).
        """
        logger.debug(
            "Creating edge from '%s' to '%s' with condition=%s",
            from_node,
            to_node,
            "yes" if condition else "no",
        )
        self.from_node = from_node
        self.to_node = to_node
        self.condition = condition
        self.condition_result: str | None = None

Attributes

  • from_node (instance attribute): from_node = from_node
  • to_node (instance attribute): to_node = to_node
  • condition (instance attribute): condition = condition
  • condition_result (instance attribute): condition_result = None

Functions

__init__
__init__(from_node, to_node, condition=None)

Initialize a new Edge with source, destination, and optional condition.

Parameters:

  • from_node (str, required): Name of the source node. Must match a node name in the graph.
  • to_node (str, required): Name of the destination node. Must match a node name in the graph or be a special constant like END.
  • condition (Callable | None, default None): Optional callable that takes an AgentState as argument and returns a value to determine if this edge should be followed. If None, this is a static edge that's always followed.
Note

The condition function should be deterministic and side-effect free for predictable execution behavior. It receives the current AgentState and should return a boolean (for simple conditions) or a string/value (for mapped conditional routing).

Source code in pyagenity/graph/edge.py
def __init__(
    self,
    from_node: str,
    to_node: str,
    condition: Callable | None = None,
):
    """Initialize a new Edge with source, destination, and optional condition.

    Args:
        from_node: Name of the source node. Must match a node name in the graph.
        to_node: Name of the destination node. Must match a node name in the graph
            or be a special constant like END.
        condition: Optional callable that takes an AgentState as argument and
            returns a value to determine if this edge should be followed.
            If None, this is a static edge that's always followed.

    Note:
        The condition function should be deterministic and side-effect free
        for predictable execution behavior. It receives the current AgentState
        and should return a boolean (for simple conditions) or a string/value
        (for mapped conditional routing).
    """
    logger.debug(
        "Creating edge from '%s' to '%s' with condition=%s",
        from_node,
        to_node,
        "yes" if condition else "no",
    )
    self.from_node = from_node
    self.to_node = to_node
    self.condition = condition
    self.condition_result: str | None = None

Node

Represents a node in the graph workflow.

A Node encapsulates a function or ToolNode that can be executed as part of a graph workflow. It handles dependency injection, parameter mapping, and execution context management.

The Node class supports both regular callable functions and ToolNode instances for handling tool-based operations. It automatically injects dependencies based on function signatures and provides legacy parameter support.

Attributes:

  • name (str): Unique identifier for the node within the graph.
  • func (Union[Callable, ToolNode]): The function or ToolNode to execute.

Example

>>> def my_function(state, config):
...     return {"result": "processed"}
>>> node = Node("processor", my_function)
>>> result = await node.execute(state, config)

Methods:

  • __init__: Initialize a new Node instance with function and dependencies.
  • execute: Execute the node function with comprehensive context and callback support.
  • stream: Execute the node function with streaming output support.
Source code in pyagenity/graph/node.py
class Node:
    """Represents a node in the graph workflow.

    A Node encapsulates a function or ToolNode that can be executed as part of
    a graph workflow. It handles dependency injection, parameter mapping, and
    execution context management.

    The Node class supports both regular callable functions and ToolNode instances
    for handling tool-based operations. It automatically injects dependencies
    based on function signatures and provides legacy parameter support.

    Attributes:
        name (str): Unique identifier for the node within the graph.
        func (Union[Callable, ToolNode]): The function or ToolNode to execute.

    Example:
        >>> def my_function(state, config):
        ...     return {"result": "processed"}
        >>> node = Node("processor", my_function)
        >>> result = await node.execute(config, state)
    """

    def __init__(
        self,
        name: str,
        func: Union[Callable, "ToolNode"],
        publisher: BasePublisher | None = Inject[BasePublisher],
    ):
        """Initialize a new Node instance with function and dependencies.

        Args:
            name: Unique identifier for the node within the graph. This name
                is used for routing, logging, and referencing the node in
                graph configuration.
            func: The function or ToolNode to execute when this node is called.
                Functions should accept at least 'state' and 'config' parameters.
                ToolNode instances handle tool-based operations and provide
                their own execution logic.
            publisher: Optional event publisher for execution monitoring.
                Injected via dependency injection if not explicitly provided.
                Used for publishing node execution events and status updates.

        Note:
            The function signature is automatically analyzed to determine
            required parameters and dependency injection points. Parameters
            matching injectable service names will be automatically provided
            by the framework during execution.
        """
        logger.debug(
            "Initializing node '%s' with func=%s",
            name,
            getattr(func, "__name__", type(func).__name__),
        )
        self.name = name
        self.func = func
        self.publisher = publisher
        self.invoke_handler = InvokeNodeHandler(
            name,
            func,
        )

        self.stream_handler = StreamNodeHandler(
            name,
            func,
        )

    async def execute(
        self,
        config: dict[str, Any],
        state: AgentState,
        callback_mgr: CallbackManager = Inject[CallbackManager],
    ) -> dict[str, Any] | list[Message]:
        """Execute the node function with comprehensive context and callback support.

        Executes the node's function or ToolNode with full dependency injection,
        callback hook execution, and error handling. This method provides the
        complete execution environment including state access, configuration,
        and injected services.

        Args:
            config: Configuration dictionary containing execution context,
                user settings, thread identification, and runtime parameters.
            state: Current AgentState providing workflow context, message history,
                and shared state information accessible to the node function.
            callback_mgr: Callback manager for executing pre/post execution hooks.
                Injected via dependency injection if not explicitly provided.

        Returns:
            Either a dictionary containing updated state and execution results,
            or a list of Message objects representing the node's output.
            The return type depends on the node function's implementation.

        Raises:
            Various exceptions depending on node function behavior. All exceptions
            are handled by the callback manager's error handling hooks before
            being propagated.

        Example:
            ```python
            # Node function that returns messages
            def process_data(state, config):
                result = process(state.data)
                return [Message.text_message(f"Processed: {result}")]


            node = Node("processor", process_data)
            messages = await node.execute(config, state)
            ```

        Note:
            The node function receives dependency-injected parameters based on
            its signature. Common injectable parameters include 'state', 'config',
            'context_manager', 'publisher', and other framework services.
        """
        return await self.invoke_handler.invoke(
            config,
            state,
            callback_mgr,
        )

    async def stream(
        self,
        config: dict[str, Any],
        state: AgentState,
        callback_mgr: CallbackManager = Inject[CallbackManager],
    ) -> AsyncIterable[dict[str, Any] | Message]:
        """Execute the node function with streaming output support.

        Similar to execute() but designed for streaming scenarios where the node
        function can produce incremental results. This method provides an async
        iterator interface over the node's outputs, allowing for real-time
        processing and response streaming.

        Args:
            config: Configuration dictionary with execution context and settings.
            state: Current AgentState providing workflow context and shared state.
            callback_mgr: Callback manager for pre/post execution hook handling.

        Yields:
            Dictionary objects or Message instances representing incremental
            outputs from the node function. The exact type and frequency of
            yields depends on the node function's streaming implementation.

        Example:
            ```python
            async def streaming_processor(state, config):
                for item in large_dataset:
                    result = process_item(item)
                    yield Message.text_message(f"Processed item: {result}")


            node = Node("stream_processor", streaming_processor)
            async for output in node.stream(config, state):
                print(f"Streamed: {output.content}")
            ```

        Note:
            Not all node functions support streaming. For non-streaming functions,
            this method will yield a single result equivalent to calling execute().
            The streaming capability is determined by the node function's implementation.
        """
        result = self.stream_handler.stream(
            config,
            state,
            callback_mgr,
        )

        async for item in result:
            yield item

Attributes

func instance-attribute
func = func
invoke_handler instance-attribute
invoke_handler = InvokeNodeHandler(name, func)
name instance-attribute
name = name
publisher instance-attribute
publisher = publisher
stream_handler instance-attribute
stream_handler = StreamNodeHandler(name, func)

Functions

__init__
__init__(name, func, publisher=Inject[BasePublisher])

Initialize a new Node instance with function and dependencies.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | Unique identifier for the node within the graph. This name is used for routing, logging, and referencing the node in graph configuration. | *required* |
| `func` | `Union[Callable, ToolNode]` | The function or ToolNode to execute when this node is called. Functions should accept at least 'state' and 'config' parameters. ToolNode instances handle tool-based operations and provide their own execution logic. | *required* |
| `publisher` | `BasePublisher \| None` | Optional event publisher for execution monitoring. Injected via dependency injection if not explicitly provided. Used for publishing node execution events and status updates. | `Inject[BasePublisher]` |

Note

The function signature is automatically analyzed to determine required parameters and dependency injection points. Parameters matching injectable service names will be automatically provided by the framework during execution.

Source code in pyagenity/graph/node.py
def __init__(
    self,
    name: str,
    func: Union[Callable, "ToolNode"],
    publisher: BasePublisher | None = Inject[BasePublisher],
):
    """Initialize a new Node instance with function and dependencies.

    Args:
        name: Unique identifier for the node within the graph. This name
            is used for routing, logging, and referencing the node in
            graph configuration.
        func: The function or ToolNode to execute when this node is called.
            Functions should accept at least 'state' and 'config' parameters.
            ToolNode instances handle tool-based operations and provide
            their own execution logic.
        publisher: Optional event publisher for execution monitoring.
            Injected via dependency injection if not explicitly provided.
            Used for publishing node execution events and status updates.

    Note:
        The function signature is automatically analyzed to determine
        required parameters and dependency injection points. Parameters
        matching injectable service names will be automatically provided
        by the framework during execution.
    """
    logger.debug(
        "Initializing node '%s' with func=%s",
        name,
        getattr(func, "__name__", type(func).__name__),
    )
    self.name = name
    self.func = func
    self.publisher = publisher
    self.invoke_handler = InvokeNodeHandler(
        name,
        func,
    )

    self.stream_handler = StreamNodeHandler(
        name,
        func,
    )
execute async
execute(config, state, callback_mgr=Inject[CallbackManager])

Execute the node function with comprehensive context and callback support.

Executes the node's function or ToolNode with full dependency injection, callback hook execution, and error handling. This method provides the complete execution environment including state access, configuration, and injected services.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `config` | `dict[str, Any]` | Configuration dictionary containing execution context, user settings, thread identification, and runtime parameters. | *required* |
| `state` | `AgentState` | Current AgentState providing workflow context, message history, and shared state information accessible to the node function. | *required* |
| `callback_mgr` | `CallbackManager` | Callback manager for executing pre/post execution hooks. Injected via dependency injection if not explicitly provided. | `Inject[CallbackManager]` |

Returns:

| Type | Description |
| --- | --- |
| `dict[str, Any] \| list[Message]` | Either a dictionary containing updated state and execution results, or a list of Message objects representing the node's output. The return type depends on the node function's implementation. |

Example
# Node function that returns messages
def process_data(state, config):
    result = process(state.data)
    return [Message.text_message(f"Processed: {result}")]


node = Node("processor", process_data)
messages = await node.execute(config, state)
Note

The node function receives dependency-injected parameters based on its signature. Common injectable parameters include 'state', 'config', 'context_manager', 'publisher', and other framework services.

Source code in pyagenity/graph/node.py
async def execute(
    self,
    config: dict[str, Any],
    state: AgentState,
    callback_mgr: CallbackManager = Inject[CallbackManager],
) -> dict[str, Any] | list[Message]:
    """Execute the node function with comprehensive context and callback support.

    Executes the node's function or ToolNode with full dependency injection,
    callback hook execution, and error handling. This method provides the
    complete execution environment including state access, configuration,
    and injected services.

    Args:
        config: Configuration dictionary containing execution context,
            user settings, thread identification, and runtime parameters.
        state: Current AgentState providing workflow context, message history,
            and shared state information accessible to the node function.
        callback_mgr: Callback manager for executing pre/post execution hooks.
            Injected via dependency injection if not explicitly provided.

    Returns:
        Either a dictionary containing updated state and execution results,
        or a list of Message objects representing the node's output.
        The return type depends on the node function's implementation.

    Raises:
        Various exceptions depending on node function behavior. All exceptions
        are handled by the callback manager's error handling hooks before
        being propagated.

    Example:
        ```python
        # Node function that returns messages
        def process_data(state, config):
            result = process(state.data)
            return [Message.text_message(f"Processed: {result}")]


        node = Node("processor", process_data)
        messages = await node.execute(config, state)
        ```

    Note:
        The node function receives dependency-injected parameters based on
        its signature. Common injectable parameters include 'state', 'config',
        'context_manager', 'publisher', and other framework services.
    """
    return await self.invoke_handler.invoke(
        config,
        state,
        callback_mgr,
    )
stream async
stream(config, state, callback_mgr=Inject[CallbackManager])

Execute the node function with streaming output support.

Similar to execute() but designed for streaming scenarios where the node function can produce incremental results. This method provides an async iterator interface over the node's outputs, allowing for real-time processing and response streaming.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `config` | `dict[str, Any]` | Configuration dictionary with execution context and settings. | *required* |
| `state` | `AgentState` | Current AgentState providing workflow context and shared state. | *required* |
| `callback_mgr` | `CallbackManager` | Callback manager for pre/post execution hook handling. | `Inject[CallbackManager]` |

Yields:

| Type | Description |
| --- | --- |
| `AsyncIterable[dict[str, Any] \| Message]` | Dictionary objects or Message instances representing incremental outputs from the node function. The exact type and frequency of yields depends on the node function's streaming implementation. |

Example
async def streaming_processor(state, config):
    for item in large_dataset:
        result = process_item(item)
        yield Message.text_message(f"Processed item: {result}")


node = Node("stream_processor", streaming_processor)
async for output in node.stream(config, state):
    print(f"Streamed: {output.content}")
Note

Not all node functions support streaming. For non-streaming functions, this method will yield a single result equivalent to calling execute(). The streaming capability is determined by the node function's implementation.

Source code in pyagenity/graph/node.py
async def stream(
    self,
    config: dict[str, Any],
    state: AgentState,
    callback_mgr: CallbackManager = Inject[CallbackManager],
) -> AsyncIterable[dict[str, Any] | Message]:
    """Execute the node function with streaming output support.

    Similar to execute() but designed for streaming scenarios where the node
    function can produce incremental results. This method provides an async
    iterator interface over the node's outputs, allowing for real-time
    processing and response streaming.

    Args:
        config: Configuration dictionary with execution context and settings.
        state: Current AgentState providing workflow context and shared state.
        callback_mgr: Callback manager for pre/post execution hook handling.

    Yields:
        Dictionary objects or Message instances representing incremental
        outputs from the node function. The exact type and frequency of
        yields depends on the node function's streaming implementation.

    Example:
        ```python
        async def streaming_processor(state, config):
            for item in large_dataset:
                result = process_item(item)
                yield Message.text_message(f"Processed item: {result}")


        node = Node("stream_processor", streaming_processor)
        async for output in node.stream(config, state):
            print(f"Streamed: {output.content}")
        ```

    Note:
        Not all node functions support streaming. For non-streaming functions,
        this method will yield a single result equivalent to calling execute().
        The streaming capability is determined by the node function's implementation.
    """
    result = self.stream_handler.stream(
        config,
        state,
        callback_mgr,
    )

    async for item in result:
        yield item

StateGraph

Main graph class for orchestrating multi-agent workflows.

This class provides the core functionality for building and managing stateful agent workflows. It is similar to LangGraph's StateGraph, with added support for dependency injection.

The graph is generic over state types to support custom AgentState subclasses, allowing for type-safe state management throughout the workflow execution.
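
A minimal sketch of a typed graph follows, assuming an AgentState subclass can add ordinary fields; `ReviewState` and its `score` attribute are purely illustrative.

```python
# Sketch only: ReviewState and its score field are hypothetical.
class ReviewState(AgentState):
    score: float = 0.0


def grade(state: ReviewState, config):
    return [Message.text_message(f"score: {state.score}")]


graph = StateGraph[ReviewState](ReviewState())
graph.add_node("grade", grade)
graph.set_entry_point("grade")
graph.add_edge("grade", END)
compiled = graph.compile()
```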

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `state` | `StateT` | The current state of the graph workflow. |
| `nodes` | `dict[str, Node]` | Collection of nodes in the graph. |
| `edges` | `list[Edge]` | Collection of edges connecting nodes. |
| `entry_point` | `str \| None` | Name of the starting node for execution. |
| `context_manager` | `BaseContextManager[StateT] \| None` | Optional context manager for handling cross-node state operations. |
| `dependency_container` | `DependencyContainer` | Container for managing dependencies that can be injected into node functions. |
| `compiled` | `bool` | Whether the graph has been compiled for execution. |

Example

graph = StateGraph()
graph.add_node("process", process_function)
graph.add_edge(START, "process")
graph.add_edge("process", END)
compiled = graph.compile()
result = compiled.invoke({"input": "data"})

Methods:

| Name | Description |
| --- | --- |
| `__init__` | Initialize a new StateGraph instance. |
| `add_conditional_edges` | Add conditional routing between nodes based on runtime evaluation. |
| `add_edge` | Add a static edge between two nodes. |
| `add_node` | Add a node to the graph. |
| `compile` | Compile the graph for execution. |
| `set_entry_point` | Set the entry point for the graph. |

Source code in pyagenity/graph/state_graph.py
class StateGraph[StateT: AgentState]:
    """Main graph class for orchestrating multi-agent workflows.

    This class provides the core functionality for building and managing stateful
    agent workflows. It is similar to LangGraph's StateGraph
    integration with support for dependency injection.

    The graph is generic over state types to support custom AgentState subclasses,
    allowing for type-safe state management throughout the workflow execution.

    Attributes:
        state (StateT): The current state of the graph workflow.
        nodes (dict[str, Node]): Collection of nodes in the graph.
        edges (list[Edge]): Collection of edges connecting nodes.
        entry_point (str | None): Name of the starting node for execution.
        context_manager (BaseContextManager[StateT] | None): Optional context manager
            for handling cross-node state operations.
        dependency_container (DependencyContainer): Container for managing
            dependencies that can be injected into node functions.
        compiled (bool): Whether the graph has been compiled for execution.

    Example:
        >>> graph = StateGraph()
        >>> graph.add_node("process", process_function)
        >>> graph.add_edge(START, "process")
        >>> graph.add_edge("process", END)
        >>> compiled = graph.compile()
        >>> result = compiled.invoke({"input": "data"})
    """

    def __init__(
        self,
        state: StateT | None = None,
        context_manager: BaseContextManager[StateT] | None = None,
        publisher: BasePublisher | None = None,
        id_generator: BaseIDGenerator = DefaultIDGenerator(),
        container: InjectQ | None = None,
        thread_name_generator: Callable[[], str] | None = None,
    ):
        """Initialize a new StateGraph instance.

        Args:
            state: Initial state for the graph. If None, a default AgentState
                will be created.
            context_manager: Optional context manager for handling cross-node
                state operations and advanced state management patterns.
            container: Dependency injection container (InjectQ) used to provide
                services to node functions. If None, the global singleton
                instance is used.
            publisher: Publisher for emitting events during execution

        Note:
            START and END nodes are automatically added to the graph upon
            initialization and accept the full node signature including
            dependencies.

        Example:
            # Basic usage with default AgentState
            >>> graph = StateGraph()

            # With custom state
            >>> custom_state = MyCustomState()
            >>> graph = StateGraph(custom_state)

            # Or using type hints for clarity
            >>> graph = StateGraph[MyCustomState](MyCustomState())
        """
        logger.info("Initializing StateGraph")
        logger.debug(
            "StateGraph init with state=%s, context_manager=%s",
            type(state).__name__ if state else "default AgentState",
            type(context_manager).__name__ if context_manager else None,
        )

        # State handling
        self._state: StateT = state if state else AgentState()  # type: ignore[assignment]

        # Graph structure
        self.nodes: dict[str, Node] = {}
        self.edges: list[Edge] = []
        self.entry_point: str | None = None

        # Services
        self._publisher: BasePublisher | None = publisher
        self._id_generator: BaseIDGenerator = id_generator
        self._context_manager: BaseContextManager[StateT] | None = context_manager
        self.thread_name_generator = thread_name_generator
        # save container for dependency injection
        # if any container is passed then we will activate that
        # otherwise we can skip it and use the default one
        if container is None:
            self._container = InjectQ.get_instance()
            logger.debug("No container provided, using global singleton instance")
        else:
            logger.debug("Using provided dependency container instance")
            self._container = container
            self._container.activate()

        # Register task_manager, for async tasks
        # This will be used to run background tasks
        self._task_manager = BackgroundTaskManager()

        # now setup the graph
        self._setup()

        # Add START and END nodes (accept full node signature including dependencies)
        logger.debug("Adding default START and END nodes")
        self.nodes[START] = Node(START, lambda state, config, **deps: state, self._publisher)  # type: ignore
        self.nodes[END] = Node(END, lambda state, config, **deps: state, self._publisher)
        logger.debug("StateGraph initialized with %d nodes", len(self.nodes))

    def _setup(self):
        """Setup the graph before compilation.

        This method can be used to perform any necessary setup or validation
        before compiling the graph for execution.
        """
        logger.info("Setting up StateGraph before compilation")
        # Placeholder for any setup logic needed before compilation
        # register dependencies

        # register state and context manager as singletons (these are nullable)
        self._container.bind_instance(
            BaseContextManager,
            self._context_manager,
            allow_none=True,
            allow_concrete=True,
        )
        self._container.bind_instance(
            BasePublisher,
            self._publisher,
            allow_none=True,
            allow_concrete=True,
        )

        # register id generator as factory
        self._container.bind_instance(
            BaseIDGenerator,
            self._id_generator,
            allow_concrete=True,
        )
        self._container.bind("generated_id_type", self._id_generator.id_type)
        # Allow async method also
        self._container.bind_factory(
            "generated_id",
            lambda: self._id_generator.generate(),
        )

        # Attach Thread name generator if provided
        if self.thread_name_generator is None:
            self.thread_name_generator = generate_dummy_thread_name

        generator = self.thread_name_generator or generate_dummy_thread_name

        self._container.bind_factory(
            "generated_thread_name",
            lambda: generator(),
        )

        # Save BackgroundTaskManager
        self._container.bind_instance(
            BackgroundTaskManager,
            self._task_manager,
            allow_concrete=False,
        )

    def add_node(
        self,
        name_or_func: str | Callable,
        func: Union[Callable, "ToolNode", None] = None,
    ) -> "StateGraph":
        """Add a node to the graph.

        This method supports two calling patterns:
        1. Pass a callable as the first argument (name inferred from function name)
        2. Pass a name string and callable/ToolNode as separate arguments

        Args:
            name_or_func: Either the node name (str) or a callable function.
                If callable, the function name will be used as the node name.
            func: The function or ToolNode to execute. Required if name_or_func
                is a string, ignored if name_or_func is callable.

        Returns:
            StateGraph: The graph instance for method chaining.

        Raises:
            ValueError: If invalid arguments are provided.

        Example:
            >>> # Method 1: Function name inferred
            >>> graph.add_node(my_function)
            >>> # Method 2: Explicit name and function
            >>> graph.add_node("process", my_function)
        """
        if callable(name_or_func) and func is None:
            # Function passed as first argument
            name = name_or_func.__name__
            func = name_or_func
            logger.debug("Adding node '%s' with inferred name from function", name)
        elif isinstance(name_or_func, str) and (callable(func) or isinstance(func, ToolNode)):
            # Name and function passed separately
            name = name_or_func
            logger.debug(
                "Adding node '%s' with explicit name and %s",
                name,
                "ToolNode" if isinstance(func, ToolNode) else "callable",
            )
        else:
            error_msg = "Invalid arguments for add_node"
            logger.error(error_msg)
            raise ValueError(error_msg)

        self.nodes[name] = Node(name, func)
        logger.info("Added node '%s' to graph (total nodes: %d)", name, len(self.nodes))
        return self

    def add_edge(
        self,
        from_node: str,
        to_node: str,
    ) -> "StateGraph":
        """Add a static edge between two nodes.

        Creates a direct connection from one node to another. If the source
        node is START, the target node becomes the entry point for the graph.

        Args:
            from_node: Name of the source node.
            to_node: Name of the target node.

        Returns:
            StateGraph: The graph instance for method chaining.

        Example:
            >>> graph.add_edge("node1", "node2")
            >>> graph.add_edge(START, "entry_node")  # Sets entry point
        """
        logger.debug("Adding edge from '%s' to '%s'", from_node, to_node)
        # Set entry point if edge is from START
        if from_node == START:
            self.entry_point = to_node
            logger.info("Set entry point to '%s'", to_node)
        self.edges.append(Edge(from_node, to_node))
        logger.debug("Added edge (total edges: %d)", len(self.edges))
        return self

    def add_conditional_edges(
        self,
        from_node: str,
        condition: Callable,
        path_map: dict[str, str] | None = None,
    ) -> "StateGraph":
        """Add conditional routing between nodes based on runtime evaluation.

        Creates dynamic routing logic where the next node is determined by evaluating
        a condition function against the current state. This enables complex branching
        logic, decision trees, and adaptive workflow routing.

        Args:
            from_node: Name of the source node where the condition is evaluated.
            condition: Callable function that takes the current AgentState and returns
                a value used for routing decisions. Should be deterministic and
                side-effect free.
            path_map: Optional dictionary mapping condition results to destination nodes.
                If provided, the condition's return value is looked up in this mapping.
                If None, the condition should return the destination node name directly.

        Returns:
            StateGraph: The graph instance for method chaining.

        Raises:
            ValueError: If the condition function or path_map configuration is invalid.

        Example:
            ```python
            # Direct routing - condition returns node name
            def route_by_priority(state):
                priority = state.data.get("priority", "normal")
                return "urgent_handler" if priority == "high" else "normal_handler"


            graph.add_conditional_edges("classifier", route_by_priority)


            # Mapped routing - condition result mapped to nodes
            def get_category(state):
                return state.data.get("category", "default")


            category_map = {
                "finance": "finance_processor",
                "legal": "legal_processor",
                "default": "general_processor",
            }
            graph.add_conditional_edges("categorizer", get_category, category_map)
            ```

        Note:
            The condition function receives the current AgentState and should return
            consistent results for the same state. If using path_map, ensure the
            condition's return values match the map keys exactly.
        """
        """Add conditional edges from a node based on a condition function.

        Creates edges that are traversed based on the result of a condition
        function. The condition function receives the current state and should
        return a value that determines which edge to follow.

        Args:
            from_node: Name of the source node.
            condition: Function that evaluates the current state and returns
                a value to determine the next node.
            path_map: Optional mapping from condition results to target nodes.
                If provided, creates multiple conditional edges. If None,
                creates a single conditional edge.

        Returns:
            StateGraph: The graph instance for method chaining.

        Example:
            >>> def route_condition(state):
            ...     return "success" if state.success else "failure"
            >>> graph.add_conditional_edges(
            ...     "processor",
            ...     route_condition,
            ...     {"success": "next_step", "failure": "error_handler"},
            ... )
        """
        # Create edges based on possible returns from condition function
        logger.debug(
            "Node '%s' adding conditional edges with path_map: %s",
            from_node,
            path_map,
        )
        if path_map:
            logger.debug(
                "Node '%s' adding conditional edges with path_map: %s", from_node, path_map
            )
            for condition_result, target_node in path_map.items():
                edge = Edge(from_node, target_node, condition)
                edge.condition_result = condition_result
                self.edges.append(edge)
        else:
            # Single conditional edge
            logger.debug("Node '%s' adding single conditional edge", from_node)
            self.edges.append(Edge(from_node, "", condition))
        return self

    def set_entry_point(self, node_name: str) -> "StateGraph":
        """Set the entry point for the graph."""
        self.entry_point = node_name
        self.add_edge(START, node_name)
        logger.info("Set entry point to '%s'", node_name)
        return self

    def compile(
        self,
        checkpointer: BaseCheckpointer[StateT] | None = None,
        store: BaseStore | None = None,
        interrupt_before: list[str] | None = None,
        interrupt_after: list[str] | None = None,
        callback_manager: CallbackManager = CallbackManager(),
    ) -> "CompiledGraph[StateT]":
        """Compile the graph for execution.

        Args:
            checkpointer: Checkpointer for state persistence
            store: Store for additional data
            interrupt_before: List of node names to interrupt before execution
            interrupt_after: List of node names to interrupt after execution
            callback_manager: Callback manager for executing hooks
        """
        logger.info(
            "Compiling graph with %d nodes, %d edges, entry_point='%s'",
            len(self.nodes),
            len(self.edges),
            self.entry_point,
        )
        logger.debug(
            "Compile options: interrupt_before=%s, interrupt_after=%s",
            interrupt_before,
            interrupt_after,
        )

        if not self.entry_point:
            error_msg = "No entry point set. Use set_entry_point() or add an edge from START."
            logger.error(error_msg)
            raise GraphError(error_msg)

        # Validate graph structure
        logger.debug("Validating graph structure")
        self._validate_graph()
        logger.debug("Graph structure validated successfully")

        # Validate interrupt node names
        interrupt_before = interrupt_before or []
        interrupt_after = interrupt_after or []

        all_interrupt_nodes = set(interrupt_before + interrupt_after)
        invalid_nodes = all_interrupt_nodes - set(self.nodes.keys())
        if invalid_nodes:
            error_msg = f"Invalid interrupt nodes: {invalid_nodes}. Must be existing node names."
            logger.error(error_msg)
            raise GraphError(error_msg)

        self.compiled = True
        logger.info("Graph compilation completed successfully")
        # Import here to avoid circular import at module import time
        # Now update Checkpointer
        if checkpointer is None:
            from pyagenity.checkpointer import InMemoryCheckpointer

            checkpointer = InMemoryCheckpointer[StateT]()
            logger.debug("No checkpointer provided, using InMemoryCheckpointer")

        # Import the CompiledGraph class
        from .compiled_graph import CompiledGraph

        # Setup dependencies
        self._container.bind_instance(
            BaseCheckpointer,
            checkpointer,
            allow_concrete=True,
        )  # not null as we set default
        self._container.bind_instance(
            BaseStore,
            store,
            allow_none=True,
            allow_concrete=True,
        )
        self._container.bind_instance(
            CallbackManager,
            callback_manager,
            allow_concrete=True,
        )  # not null as we set default
        self._container.bind("interrupt_before", interrupt_before)
        self._container.bind("interrupt_after", interrupt_after)
        self._container.bind_instance(StateGraph, self)

        app = CompiledGraph(
            state=self._state,
            interrupt_after=interrupt_after,
            interrupt_before=interrupt_before,
            state_graph=self,
            checkpointer=checkpointer,
            publisher=self._publisher,
            store=store,
            task_manager=self._task_manager,
        )

        self._container.bind(CompiledGraph, app)
        # Compile the Graph, so it will optimize the dependency graph
        self._container.compile()
        return app

    def _validate_graph(self):
        """Validate the graph structure."""
        # Check for orphaned nodes
        connected_nodes = set()
        for edge in self.edges:
            connected_nodes.add(edge.from_node)
            connected_nodes.add(edge.to_node)

        all_nodes = set(self.nodes.keys())
        orphaned = all_nodes - connected_nodes
        if orphaned - {START, END}:  # START and END can be orphaned
            logger.error("Orphaned nodes detected: %s", orphaned - {START, END})
            raise GraphError(f"Orphaned nodes detected: {orphaned - {START, END}}")

        # Check that all edge targets exist
        for edge in self.edges:
            if edge.to_node and edge.to_node not in self.nodes:
                logger.error("Edge '%s' targets non-existent node: %s", edge, edge.to_node)
                raise GraphError(f"Edge targets non-existent node: {edge.to_node}")

Attributes

edges instance-attribute
edges = []
entry_point instance-attribute
entry_point = None
nodes instance-attribute
nodes = {}
thread_name_generator instance-attribute
thread_name_generator = thread_name_generator

Functions

__init__
__init__(state=None, context_manager=None, publisher=None, id_generator=DefaultIDGenerator(), container=None, thread_name_generator=None)

Initialize a new StateGraph instance.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `state` | `StateT \| None` | Initial state for the graph. If None, a default AgentState will be created. | `None` |
| `context_manager` | `BaseContextManager[StateT] \| None` | Optional context manager for handling cross-node state operations and advanced state management patterns. | `None` |
| `publisher` | `BasePublisher \| None` | Publisher for emitting events during execution. | `None` |
| `id_generator` | `BaseIDGenerator` | Generator used to produce IDs during execution (bound as the `generated_id` dependency). | `DefaultIDGenerator()` |
| `container` | `InjectQ \| None` | Dependency injection container used to provide services to node functions. If None, the global singleton instance is used. | `None` |
| `thread_name_generator` | `Callable[[], str] \| None` | Optional callable that produces thread names; a default generator is used if None. | `None` |

Note

START and END nodes are automatically added to the graph upon initialization and accept the full node signature including dependencies.

Example

# Basic usage with default AgentState
graph = StateGraph()

# With custom state
custom_state = MyCustomState()
graph = StateGraph(custom_state)

# Or using type hints for clarity
graph = StateGraph[MyCustomState](MyCustomState())

Source code in pyagenity/graph/state_graph.py
def __init__(
    self,
    state: StateT | None = None,
    context_manager: BaseContextManager[StateT] | None = None,
    publisher: BasePublisher | None = None,
    id_generator: BaseIDGenerator = DefaultIDGenerator(),
    container: InjectQ | None = None,
    thread_name_generator: Callable[[], str] | None = None,
):
    """Initialize a new StateGraph instance.

    Args:
        state: Initial state for the graph. If None, a default AgentState
            will be created.
        context_manager: Optional context manager for handling cross-node
            state operations and advanced state management patterns.
        container: Dependency injection container (InjectQ) used to provide
            services to node functions. If None, the global singleton
            instance is used.
        publisher: Publisher for emitting events during execution

    Note:
        START and END nodes are automatically added to the graph upon
        initialization and accept the full node signature including
        dependencies.

    Example:
        # Basic usage with default AgentState
        >>> graph = StateGraph()

        # With custom state
        >>> custom_state = MyCustomState()
        >>> graph = StateGraph(custom_state)

        # Or using type hints for clarity
        >>> graph = StateGraph[MyCustomState](MyCustomState())
    """
    logger.info("Initializing StateGraph")
    logger.debug(
        "StateGraph init with state=%s, context_manager=%s",
        type(state).__name__ if state else "default AgentState",
        type(context_manager).__name__ if context_manager else None,
    )

    # State handling
    self._state: StateT = state if state else AgentState()  # type: ignore[assignment]

    # Graph structure
    self.nodes: dict[str, Node] = {}
    self.edges: list[Edge] = []
    self.entry_point: str | None = None

    # Services
    self._publisher: BasePublisher | None = publisher
    self._id_generator: BaseIDGenerator = id_generator
    self._context_manager: BaseContextManager[StateT] | None = context_manager
    self.thread_name_generator = thread_name_generator
    # save container for dependency injection
    # if any container is passed then we will activate that
    # otherwise we can skip it and use the default one
    if container is None:
        self._container = InjectQ.get_instance()
        logger.debug("No container provided, using global singleton instance")
    else:
        logger.debug("Using provided dependency container instance")
        self._container = container
        self._container.activate()

    # Register task_manager, for async tasks
    # This will be used to run background tasks
    self._task_manager = BackgroundTaskManager()

    # now setup the graph
    self._setup()

    # Add START and END nodes (accept full node signature including dependencies)
    logger.debug("Adding default START and END nodes")
    self.nodes[START] = Node(START, lambda state, config, **deps: state, self._publisher)  # type: ignore
    self.nodes[END] = Node(END, lambda state, config, **deps: state, self._publisher)
    logger.debug("StateGraph initialized with %d nodes", len(self.nodes))
add_conditional_edges
add_conditional_edges(from_node, condition, path_map=None)

Add conditional routing between nodes based on runtime evaluation.

Creates dynamic routing logic where the next node is determined by evaluating a condition function against the current state. This enables complex branching logic, decision trees, and adaptive workflow routing.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `from_node` | `str` | Name of the source node where the condition is evaluated. | *required* |
| `condition` | `Callable` | Callable function that takes the current AgentState and returns a value used for routing decisions. Should be deterministic and side-effect free. | *required* |
| `path_map` | `dict[str, str] \| None` | Optional dictionary mapping condition results to destination nodes. If provided, the condition's return value is looked up in this mapping. If None, the condition should return the destination node name directly. | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `StateGraph` | `StateGraph` | The graph instance for method chaining. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the condition function or path_map configuration is invalid. |

Example
# Direct routing - condition returns node name
def route_by_priority(state):
    priority = state.data.get("priority", "normal")
    return "urgent_handler" if priority == "high" else "normal_handler"


graph.add_conditional_edges("classifier", route_by_priority)


# Mapped routing - condition result mapped to nodes
def get_category(state):
    return state.data.get("category", "default")


category_map = {
    "finance": "finance_processor",
    "legal": "legal_processor",
    "default": "general_processor",
}
graph.add_conditional_edges("categorizer", get_category, category_map)
Note

The condition function receives the current AgentState and should return consistent results for the same state. If using path_map, ensure the condition's return values match the map keys exactly.

Source code in pyagenity/graph/state_graph.py
def add_conditional_edges(
    self,
    from_node: str,
    condition: Callable,
    path_map: dict[str, str] | None = None,
) -> "StateGraph":
    """Add conditional routing between nodes based on runtime evaluation.

    Creates dynamic routing logic where the next node is determined by evaluating
    a condition function against the current state. This enables complex branching
    logic, decision trees, and adaptive workflow routing.

    Args:
        from_node: Name of the source node where the condition is evaluated.
        condition: Callable function that takes the current AgentState and returns
            a value used for routing decisions. Should be deterministic and
            side-effect free.
        path_map: Optional dictionary mapping condition results to destination nodes.
            If provided, the condition's return value is looked up in this mapping.
            If None, the condition should return the destination node name directly.

    Returns:
        StateGraph: The graph instance for method chaining.

    Raises:
        ValueError: If the condition function or path_map configuration is invalid.

    Example:
        ```python
        # Direct routing - condition returns node name
        def route_by_priority(state):
            priority = state.data.get("priority", "normal")
            return "urgent_handler" if priority == "high" else "normal_handler"


        graph.add_conditional_edges("classifier", route_by_priority)


        # Mapped routing - condition result mapped to nodes
        def get_category(state):
            return state.data.get("category", "default")


        category_map = {
            "finance": "finance_processor",
            "legal": "legal_processor",
            "default": "general_processor",
        }
        graph.add_conditional_edges("categorizer", get_category, category_map)
        ```

    Note:
        The condition function receives the current AgentState and should return
        consistent results for the same state. If using path_map, ensure the
        condition's return values match the map keys exactly.
    """
    """Add conditional edges from a node based on a condition function.

    Creates edges that are traversed based on the result of a condition
    function. The condition function receives the current state and should
    return a value that determines which edge to follow.

    Args:
        from_node: Name of the source node.
        condition: Function that evaluates the current state and returns
            a value to determine the next node.
        path_map: Optional mapping from condition results to target nodes.
            If provided, creates multiple conditional edges. If None,
            creates a single conditional edge.

    Returns:
        StateGraph: The graph instance for method chaining.

    Example:
        >>> def route_condition(state):
        ...     return "success" if state.success else "failure"
        >>> graph.add_conditional_edges(
        ...     "processor",
        ...     route_condition,
        ...     {"success": "next_step", "failure": "error_handler"},
        ... )
    """
    # Create edges based on possible returns from condition function
    logger.debug(
        "Node '%s' adding conditional edges with path_map: %s",
        from_node,
        path_map,
    )
    if path_map:
        logger.debug(
            "Node '%s' adding conditional edges with path_map: %s", from_node, path_map
        )
        for condition_result, target_node in path_map.items():
            edge = Edge(from_node, target_node, condition)
            edge.condition_result = condition_result
            self.edges.append(edge)
    else:
        # Single conditional edge
        logger.debug("Node '%s' adding single conditional edge", from_node)
        self.edges.append(Edge(from_node, "", condition))
    return self
add_edge
add_edge(from_node, to_node)

Add a static edge between two nodes.

Creates a direct connection from one node to another. If the source node is START, the target node becomes the entry point for the graph.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `from_node` | `str` | Name of the source node. | *required* |
| `to_node` | `str` | Name of the target node. | *required* |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `StateGraph` | `StateGraph` | The graph instance for method chaining. |

Example

graph.add_edge("node1", "node2")
graph.add_edge(START, "entry_node")  # Sets entry point

Source code in pyagenity/graph/state_graph.py
def add_edge(
    self,
    from_node: str,
    to_node: str,
) -> "StateGraph":
    """Add a static edge between two nodes.

    Creates a direct connection from one node to another. If the source
    node is START, the target node becomes the entry point for the graph.

    Args:
        from_node: Name of the source node.
        to_node: Name of the target node.

    Returns:
        StateGraph: The graph instance for method chaining.

    Example:
        >>> graph.add_edge("node1", "node2")
        >>> graph.add_edge(START, "entry_node")  # Sets entry point
    """
    logger.debug("Adding edge from '%s' to '%s'", from_node, to_node)
    # Set entry point if edge is from START
    if from_node == START:
        self.entry_point = to_node
        logger.info("Set entry point to '%s'", to_node)
    self.edges.append(Edge(from_node, to_node))
    logger.debug("Added edge (total edges: %d)", len(self.edges))
    return self
add_node
add_node(name_or_func, func=None)

Add a node to the graph.

This method supports two calling patterns:

1. Pass a callable as the first argument (name inferred from function name)
2. Pass a name string and callable/ToolNode as separate arguments

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name_or_func` | `str \| Callable` | Either the node name (str) or a callable function. If callable, the function name will be used as the node name. | *required* |
| `func` | `Union[Callable, ToolNode, None]` | The function or ToolNode to execute. Required if name_or_func is a string, ignored if name_or_func is callable. | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `StateGraph` | `StateGraph` | The graph instance for method chaining. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If invalid arguments are provided. |

Example

# Method 1: Function name inferred
graph.add_node(my_function)

# Method 2: Explicit name and function
graph.add_node("process", my_function)

Source code in pyagenity/graph/state_graph.py
def add_node(
    self,
    name_or_func: str | Callable,
    func: Union[Callable, "ToolNode", None] = None,
) -> "StateGraph":
    """Add a node to the graph.

    This method supports two calling patterns:
    1. Pass a callable as the first argument (name inferred from function name)
    2. Pass a name string and callable/ToolNode as separate arguments

    Args:
        name_or_func: Either the node name (str) or a callable function.
            If callable, the function name will be used as the node name.
        func: The function or ToolNode to execute. Required if name_or_func
            is a string, ignored if name_or_func is callable.

    Returns:
        StateGraph: The graph instance for method chaining.

    Raises:
        ValueError: If invalid arguments are provided.

    Example:
        >>> # Method 1: Function name inferred
        >>> graph.add_node(my_function)
        >>> # Method 2: Explicit name and function
        >>> graph.add_node("process", my_function)
    """
    if callable(name_or_func) and func is None:
        # Function passed as first argument
        name = name_or_func.__name__
        func = name_or_func
        logger.debug("Adding node '%s' with inferred name from function", name)
    elif isinstance(name_or_func, str) and (callable(func) or isinstance(func, ToolNode)):
        # Name and function passed separately
        name = name_or_func
        logger.debug(
            "Adding node '%s' with explicit name and %s",
            name,
            "ToolNode" if isinstance(func, ToolNode) else "callable",
        )
    else:
        error_msg = "Invalid arguments for add_node"
        logger.error(error_msg)
        raise ValueError(error_msg)

    self.nodes[name] = Node(name, func)
    logger.info("Added node '%s' to graph (total nodes: %d)", name, len(self.nodes))
    return self
compile
compile(checkpointer=None, store=None, interrupt_before=None, interrupt_after=None, callback_manager=CallbackManager())

Compile the graph for execution.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `checkpointer` | `BaseCheckpointer[StateT] \| None` | Checkpointer for state persistence. | `None` |
| `store` | `BaseStore \| None` | Store for additional data. | `None` |
| `interrupt_before` | `list[str] \| None` | List of node names to interrupt before execution. | `None` |
| `interrupt_after` | `list[str] \| None` | List of node names to interrupt after execution. | `None` |
| `callback_manager` | `CallbackManager` | Callback manager for executing hooks. | `CallbackManager()` |
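
A hedged usage sketch follows, assuming a graph with a node named "search" has already been built; when no checkpointer is passed, the documented default is an in-memory checkpointer.

```python
# Sketch: `graph` is an already-built StateGraph containing a "search" node.
compiled = graph.compile(
    interrupt_before=["search"],  # must reference existing nodes, or GraphError is raised
)
result = compiled.invoke({"input": "data"})
```
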
Source code in pyagenity/graph/state_graph.py
def compile(
    self,
    checkpointer: BaseCheckpointer[StateT] | None = None,
    store: BaseStore | None = None,
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
    callback_manager: CallbackManager = CallbackManager(),
) -> "CompiledGraph[StateT]":
    """Compile the graph for execution.

    Args:
        checkpointer: Checkpointer for state persistence
        store: Store for additional data
        interrupt_before: List of node names to interrupt before execution
        interrupt_after: List of node names to interrupt after execution
        callback_manager: Callback manager for executing hooks
    """
    logger.info(
        "Compiling graph with %d nodes, %d edges, entry_point='%s'",
        len(self.nodes),
        len(self.edges),
        self.entry_point,
    )
    logger.debug(
        "Compile options: interrupt_before=%s, interrupt_after=%s",
        interrupt_before,
        interrupt_after,
    )

    if not self.entry_point:
        error_msg = "No entry point set. Use set_entry_point() or add an edge from START."
        logger.error(error_msg)
        raise GraphError(error_msg)

    # Validate graph structure
    logger.debug("Validating graph structure")
    self._validate_graph()
    logger.debug("Graph structure validated successfully")

    # Validate interrupt node names
    interrupt_before = interrupt_before or []
    interrupt_after = interrupt_after or []

    all_interrupt_nodes = set(interrupt_before + interrupt_after)
    invalid_nodes = all_interrupt_nodes - set(self.nodes.keys())
    if invalid_nodes:
        error_msg = f"Invalid interrupt nodes: {invalid_nodes}. Must be existing node names."
        logger.error(error_msg)
        raise GraphError(error_msg)

    self.compiled = True
    logger.info("Graph compilation completed successfully")
    # Import here to avoid circular import at module import time
    # Now update Checkpointer
    if checkpointer is None:
        from pyagenity.checkpointer import InMemoryCheckpointer

        checkpointer = InMemoryCheckpointer[StateT]()
        logger.debug("No checkpointer provided, using InMemoryCheckpointer")

    # Import the CompiledGraph class
    from .compiled_graph import CompiledGraph

    # Setup dependencies
    self._container.bind_instance(
        BaseCheckpointer,
        checkpointer,
        allow_concrete=True,
    )  # not null as we set default
    self._container.bind_instance(
        BaseStore,
        store,
        allow_none=True,
        allow_concrete=True,
    )
    self._container.bind_instance(
        CallbackManager,
        callback_manager,
        allow_concrete=True,
    )  # not null as we set default
    self._container.bind("interrupt_before", interrupt_before)
    self._container.bind("interrupt_after", interrupt_after)
    self._container.bind_instance(StateGraph, self)

    app = CompiledGraph(
        state=self._state,
        interrupt_after=interrupt_after,
        interrupt_before=interrupt_before,
        state_graph=self,
        checkpointer=checkpointer,
        publisher=self._publisher,
        store=store,
        task_manager=self._task_manager,
    )

    self._container.bind(CompiledGraph, app)
    # Compile the Graph, so it will optimize the dependency graph
    self._container.compile()
    return app
set_entry_point
set_entry_point(node_name)

Set the entry point for the graph.

Source code in pyagenity/graph/state_graph.py
def set_entry_point(self, node_name: str) -> "StateGraph":
    """Set the entry point for the graph."""
    self.entry_point = node_name
    self.add_edge(START, node_name)
    logger.info("Set entry point to '%s'", node_name)
    return self
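Because `set_entry_point()` also records a `START` edge, a typical build finishes like this (a minimal sketch):

```python
from pyagenity.utils import END

graph.set_entry_point("analyze")  # records the entry point and adds START -> "analyze"
graph.add_edge("analyze", END)
compiled = graph.compile()
```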

ToolNode

Bases: SchemaMixin, LocalExecMixin, MCPMixin, ComposioMixin, LangChainMixin, KwargsResolverMixin

A unified registry and executor for callable functions from various tool providers.

ToolNode serves as the central hub for managing and executing tools from multiple sources:

- Local Python functions
- MCP (Model Context Protocol) tools
- Composio adapter tools
- LangChain tools

The class uses a mixin-based architecture to separate concerns and maintain clean integration with different tool providers. It provides both synchronous and asynchronous execution methods with comprehensive event publishing and error handling.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `_funcs` | `dict[str, Callable]` | Dictionary mapping function names to callable functions. |
| `_client` | `Client \| None` | Optional MCP client for remote tool execution. |
| `_composio` | `ComposioAdapter \| None` | Optional Composio adapter for external integrations. |
| `_langchain` | `Any \| None` | Optional LangChain adapter for LangChain tools. |
| `mcp_tools` | `list[str]` | List of available MCP tool names. |
| `composio_tools` | `list[str]` | List of available Composio tool names. |
| `langchain_tools` | `list[str]` | List of available LangChain tool names. |

Example
```python
# Define local tools
def weather_tool(location: str) -> str:
    return f"Weather in {location}: Sunny, 25°C"


def calculator(a: int, b: int) -> int:
    return a + b


# Create ToolNode with local functions
tools = ToolNode([weather_tool, calculator])

# Execute a tool
result = await tools.invoke(
    name="weather_tool",
    args={"location": "New York"},
    tool_call_id="call_123",
    config={"user_id": "user1"},
    state=agent_state,
)
```

Methods:

| Name | Description |
| --- | --- |
| `__init__` | Initialize ToolNode with functions and optional tool adapters. |
| `all_tools` | Get all available tools from all configured providers. |
| `all_tools_sync` | Synchronously get all available tools from all configured providers. |
| `get_local_tool` | Generate OpenAI-compatible tool definitions for all registered local functions. |
| `invoke` | Execute a specific tool by name with the provided arguments. |
| `stream` | Execute a tool with streaming support, yielding incremental results. |

Source code in pyagenity/graph/tool_node/base.py
class ToolNode(
    SchemaMixin,
    LocalExecMixin,
    MCPMixin,
    ComposioMixin,
    LangChainMixin,
    KwargsResolverMixin,
):
    """A unified registry and executor for callable functions from various tool providers.

    ToolNode serves as the central hub for managing and executing tools from multiple sources:
    - Local Python functions
    - MCP (Model Context Protocol) tools
    - Composio adapter tools
    - LangChain tools

    The class uses a mixin-based architecture to separate concerns and maintain clean
    integration with different tool providers. It provides both synchronous and asynchronous
    execution methods with comprehensive event publishing and error handling.

    Attributes:
        _funcs: Dictionary mapping function names to callable functions.
        _client: Optional MCP client for remote tool execution.
        _composio: Optional Composio adapter for external integrations.
        _langchain: Optional LangChain adapter for LangChain tools.
        mcp_tools: List of available MCP tool names.
        composio_tools: List of available Composio tool names.
        langchain_tools: List of available LangChain tool names.

    Example:
        ```python
        # Define local tools
        def weather_tool(location: str) -> str:
            return f"Weather in {location}: Sunny, 25°C"


        def calculator(a: int, b: int) -> int:
            return a + b


        # Create ToolNode with local functions
        tools = ToolNode([weather_tool, calculator])

        # Execute a tool
        result = await tools.invoke(
            name="weather_tool",
            args={"location": "New York"},
            tool_call_id="call_123",
            config={"user_id": "user1"},
            state=agent_state,
        )
        ```
    """

    def __init__(
        self,
        functions: t.Iterable[t.Callable],
        client: deps.Client | None = None,  # type: ignore
        composio_adapter: ComposioAdapter | None = None,
        langchain_adapter: t.Any | None = None,
    ) -> None:
        """Initialize ToolNode with functions and optional tool adapters.

        Args:
            functions: Iterable of callable functions to register as tools. Each function
                will be registered with its `__name__` as the tool identifier.
            client: Optional MCP (Model Context Protocol) client for remote tool access.
                Requires 'fastmcp' and 'mcp' packages to be installed.
            composio_adapter: Optional Composio adapter for external integrations and
                third-party API access.
            langchain_adapter: Optional LangChain adapter for accessing LangChain tools
                and integrations.

        Raises:
            ImportError: If MCP client is provided but required packages are not installed.
            TypeError: If any item in functions is not callable.

        Note:
            When using MCP client functionality, ensure you have installed the required
            dependencies with: `pip install pyagenity[mcp]`
        """
        logger.info("Initializing ToolNode with %d functions", len(list(functions)))

        if client is not None:
            # Read flags dynamically so tests can patch pyagenity.graph.tool_node.HAS_*
            mod = sys.modules.get("pyagenity.graph.tool_node")
            has_fastmcp = getattr(mod, "HAS_FASTMCP", deps.HAS_FASTMCP) if mod else deps.HAS_FASTMCP
            has_mcp = getattr(mod, "HAS_MCP", deps.HAS_MCP) if mod else deps.HAS_MCP

            if not has_fastmcp or not has_mcp:
                raise ImportError(
                    "MCP client functionality requires 'fastmcp' and 'mcp' packages. "
                    "Install with: pip install pyagenity[mcp]"
                )
            logger.debug("ToolNode initialized with MCP client")

        self._funcs: dict[str, t.Callable] = {}
        self._client: deps.Client | None = client  # type: ignore
        self._composio: ComposioAdapter | None = composio_adapter
        self._langchain: t.Any | None = langchain_adapter

        for fn in functions:
            if not callable(fn):
                raise TypeError("ToolNode only accepts callables")
            self._funcs[fn.__name__] = fn

        self.mcp_tools: list[str] = []
        self.composio_tools: list[str] = []
        self.langchain_tools: list[str] = []

    async def _all_tools_async(self) -> list[dict]:
        tools: list[dict] = self.get_local_tool()
        tools.extend(await self._get_mcp_tool())
        tools.extend(await self._get_composio_tools())
        tools.extend(await self._get_langchain_tools())
        return tools

    async def all_tools(self) -> list[dict]:
        """Get all available tools from all configured providers.

        Retrieves and combines tool definitions from local functions, MCP client,
        Composio adapter, and LangChain adapter. Each tool definition includes
        the function schema with parameters and descriptions.

        Returns:
            List of tool definitions in OpenAI function calling format. Each dict
            contains 'type': 'function' and 'function' with name, description,
            and parameters schema.

        Example:
            ```python
            tools = await tool_node.all_tools()
            # Returns:
            # [
            #   {
            #     "type": "function",
            #     "function": {
            #       "name": "weather_tool",
            #       "description": "Get weather information for a location",
            #       "parameters": {
            #         "type": "object",
            #         "properties": {
            #           "location": {"type": "string"}
            #         },
            #         "required": ["location"]
            #       }
            #     }
            #   }
            # ]
            ```
        """
        return await self._all_tools_async()

    def all_tools_sync(self) -> list[dict]:
        """Synchronously get all available tools from all configured providers.

        This is a synchronous wrapper around the async all_tools() method.
        It uses asyncio.run() to handle async operations from MCP, Composio,
        and LangChain adapters.

        Returns:
            List of tool definitions in OpenAI function calling format.

        Note:
            Prefer using the async `all_tools()` method when possible, especially
            in async contexts, to avoid potential event loop issues.
        """
        tools: list[dict] = self.get_local_tool()
        if self._client:
            result = asyncio.run(self._get_mcp_tool())
            if result:
                tools.extend(result)
        comp = asyncio.run(self._get_composio_tools())
        if comp:
            tools.extend(comp)
        lc = asyncio.run(self._get_langchain_tools())
        if lc:
            tools.extend(lc)
        return tools

    async def invoke(  # noqa: PLR0915
        self,
        name: str,
        args: dict,
        tool_call_id: str,
        config: dict[str, t.Any],
        state: AgentState,
        callback_manager: CallbackManager = Inject[CallbackManager],
    ) -> t.Any:
        """Execute a specific tool by name with the provided arguments.

        This method handles tool execution across all configured providers (local,
        MCP, Composio, LangChain) with comprehensive error handling, event publishing,
        and callback management.

        Args:
            name: The name of the tool to execute.
            args: Dictionary of arguments to pass to the tool function.
            tool_call_id: Unique identifier for this tool execution, used for
                tracking and result correlation.
            config: Configuration dictionary containing execution context and
                user-specific settings.
            state: Current agent state for context-aware tool execution.
            callback_manager: Manager for executing pre/post execution callbacks.
                Injected via dependency injection if not provided.

        Returns:
            Message object containing tool execution results, either successful
            output or error information with appropriate status indicators.

        Raises:
            The method handles all exceptions internally and returns error Messages
            rather than raising exceptions, ensuring robust execution flow.

        Example:
            ```python
            result = await tool_node.invoke(
                name="weather_tool",
                args={"location": "Paris", "units": "metric"},
                tool_call_id="call_abc123",
                config={"user_id": "user1", "session_id": "session1"},
                state=current_agent_state,
            )

            # result is a Message with tool execution results
            print(result.content)  # Tool output or error information
            ```

        Note:
            The method publishes execution events throughout the process for
            monitoring and debugging purposes. Tool execution is routed based
            on tool provider precedence: MCP → Composio → LangChain → Local.
        """
        logger.info("Executing tool '%s' with %d arguments", name, len(args))
        logger.debug("Tool arguments: %s", args)

        event = EventModel.default(
            config,
            data={"args": args, "tool_call_id": tool_call_id, "function_name": name},
            content_type=[ContentType.TOOL_CALL],
            event=Event.TOOL_EXECUTION,
        )
        event.node_name = name
        # Attach structured tool call block
        with contextlib.suppress(Exception):
            event.content_blocks = [ToolCallBlock(id=tool_call_id, name=name, args=args)]
        publish_event(event)

        if name in self.mcp_tools:
            event.metadata["is_mcp"] = True
            publish_event(event)
            res = await self._mcp_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = res.model_dump()
            # Attach tool result block mirroring the tool output
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            return res

        if name in self.composio_tools:
            event.metadata["is_composio"] = True
            publish_event(event)
            res = await self._composio_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = res.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            return res

        if name in self.langchain_tools:
            event.metadata["is_langchain"] = True
            publish_event(event)
            res = await self._langchain_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = res.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            return res

        if name in self._funcs:
            event.metadata["is_mcp"] = False
            publish_event(event)
            res = await self._internal_execute(
                name,
                args,
                tool_call_id,
                config,
                state,
                callback_manager,
            )
            event.data["message"] = res.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            return res

        error_msg = f"Tool '{name}' not found."
        event.data["error"] = error_msg
        event.event_type = EventType.ERROR
        event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
        publish_event(event)
        return Message.tool_message(
            content=[
                ErrorBlock(message=error_msg),
                ToolResultBlock(
                    call_id=tool_call_id,
                    output=error_msg,
                    is_error=True,
                    status="failed",
                ),
            ],
        )

    async def stream(  # noqa: PLR0915
        self,
        name: str,
        args: dict,
        tool_call_id: str,
        config: dict[str, t.Any],
        state: AgentState,
        callback_manager: CallbackManager = Inject[CallbackManager],
    ) -> t.AsyncIterator[Message]:
        """Execute a tool with streaming support, yielding incremental results.

        Similar to invoke() but designed for tools that can provide streaming responses
        or when you want to process results as they become available. Currently,
        most tool providers return complete results, so this method typically yields
        a single Message with the full result.

        Args:
            name: The name of the tool to execute.
            args: Dictionary of arguments to pass to the tool function.
            tool_call_id: Unique identifier for this tool execution.
            config: Configuration dictionary containing execution context.
            state: Current agent state for context-aware tool execution.
            callback_manager: Manager for executing pre/post execution callbacks.

        Yields:
            Message objects containing tool execution results or status updates.
            For most tools, this will yield a single complete result Message.

        Example:
            ```python
            async for message in tool_node.stream(
                name="data_processor",
                args={"dataset": "large_data.csv"},
                tool_call_id="call_stream123",
                config={"user_id": "user1"},
                state=current_state,
            ):
                print(f"Received: {message.content}")
                # Process each streamed result
            ```

        Note:
            The streaming interface is designed for future expansion where tools
            may provide true streaming responses. Currently, it provides a
            consistent async iterator interface over tool results.
        """
        logger.info("Executing tool '%s' with %d arguments", name, len(args))
        logger.debug("Tool arguments: %s", args)
        event = EventModel.default(
            config,
            data={"args": args, "tool_call_id": tool_call_id, "function_name": name},
            content_type=[ContentType.TOOL_CALL],
            event=Event.TOOL_EXECUTION,
        )
        event.node_name = "ToolNode"
        with contextlib.suppress(Exception):
            event.content_blocks = [ToolCallBlock(id=tool_call_id, name=name, args=args)]

        if name in self.mcp_tools:
            event.metadata["function_type"] = "mcp"
            publish_event(event)
            message = await self._mcp_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = message.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            yield message
            return

        if name in self.composio_tools:
            event.metadata["function_type"] = "composio"
            publish_event(event)
            message = await self._composio_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = message.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            yield message
            return

        if name in self.langchain_tools:
            event.metadata["function_type"] = "langchain"
            publish_event(event)
            message = await self._langchain_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = message.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            yield message
            return

        if name in self._funcs:
            event.metadata["function_type"] = "internal"
            publish_event(event)

            result = await self._internal_execute(
                name,
                args,
                tool_call_id,
                config,
                state,
                callback_manager,
            )
            event.data["message"] = result.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=result.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            yield result
            return

        error_msg = f"Tool '{name}' not found."
        event.data["error"] = error_msg
        event.event_type = EventType.ERROR
        event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
        publish_event(event)

        yield Message.tool_message(
            content=[
                ErrorBlock(message=error_msg),
                ToolResultBlock(
                    call_id=tool_call_id,
                    output=error_msg,
                    is_error=True,
                    status="failed",
                ),
            ],
        )

Attributes

- `composio_tools` (instance attribute): `composio_tools = []`
- `langchain_tools` (instance attribute): `langchain_tools = []`
- `mcp_tools` (instance attribute): `mcp_tools = []`

Functions

__init__
__init__(functions, client=None, composio_adapter=None, langchain_adapter=None)

Initialize ToolNode with functions and optional tool adapters.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `functions` | `Iterable[Callable]` | Iterable of callable functions to register as tools. Each function will be registered with its `__name__` as the tool identifier. | *required* |
| `client` | `Client \| None` | Optional MCP (Model Context Protocol) client for remote tool access. Requires the 'fastmcp' and 'mcp' packages to be installed. | `None` |
| `composio_adapter` | `ComposioAdapter \| None` | Optional Composio adapter for external integrations and third-party API access. | `None` |
| `langchain_adapter` | `Any \| None` | Optional LangChain adapter for accessing LangChain tools and integrations. | `None` |

Raises:

| Type | Description |
| --- | --- |
| `ImportError` | If an MCP client is provided but the required packages are not installed. |
| `TypeError` | If any item in `functions` is not callable. |

Note

When using MCP client functionality, ensure you have installed the required dependencies with: pip install pyagenity[mcp]
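A construction sketch (the MCP client line is commented out and purely illustrative; it assumes the `pyagenity[mcp]` extra is installed and an MCP server is reachable):

```python
# Local functions only: no optional dependencies required.
tools = ToolNode([weather_tool, calculator])

# With a remote MCP client (needs: pip install pyagenity[mcp]):
# from fastmcp import Client
# tools = ToolNode([weather_tool], client=Client("http://localhost:8000/mcp"))
```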

Source code in pyagenity/graph/tool_node/base.py
def __init__(
    self,
    functions: t.Iterable[t.Callable],
    client: deps.Client | None = None,  # type: ignore
    composio_adapter: ComposioAdapter | None = None,
    langchain_adapter: t.Any | None = None,
) -> None:
    """Initialize ToolNode with functions and optional tool adapters.

    Args:
        functions: Iterable of callable functions to register as tools. Each function
            will be registered with its `__name__` as the tool identifier.
        client: Optional MCP (Model Context Protocol) client for remote tool access.
            Requires 'fastmcp' and 'mcp' packages to be installed.
        composio_adapter: Optional Composio adapter for external integrations and
            third-party API access.
        langchain_adapter: Optional LangChain adapter for accessing LangChain tools
            and integrations.

    Raises:
        ImportError: If MCP client is provided but required packages are not installed.
        TypeError: If any item in functions is not callable.

    Note:
        When using MCP client functionality, ensure you have installed the required
        dependencies with: `pip install pyagenity[mcp]`
    """
    logger.info("Initializing ToolNode with %d functions", len(list(functions)))

    if client is not None:
        # Read flags dynamically so tests can patch pyagenity.graph.tool_node.HAS_*
        mod = sys.modules.get("pyagenity.graph.tool_node")
        has_fastmcp = getattr(mod, "HAS_FASTMCP", deps.HAS_FASTMCP) if mod else deps.HAS_FASTMCP
        has_mcp = getattr(mod, "HAS_MCP", deps.HAS_MCP) if mod else deps.HAS_MCP

        if not has_fastmcp or not has_mcp:
            raise ImportError(
                "MCP client functionality requires 'fastmcp' and 'mcp' packages. "
                "Install with: pip install pyagenity[mcp]"
            )
        logger.debug("ToolNode initialized with MCP client")

    self._funcs: dict[str, t.Callable] = {}
    self._client: deps.Client | None = client  # type: ignore
    self._composio: ComposioAdapter | None = composio_adapter
    self._langchain: t.Any | None = langchain_adapter

    for fn in functions:
        if not callable(fn):
            raise TypeError("ToolNode only accepts callables")
        self._funcs[fn.__name__] = fn

    self.mcp_tools: list[str] = []
    self.composio_tools: list[str] = []
    self.langchain_tools: list[str] = []
all_tools async
all_tools()

Get all available tools from all configured providers.

Retrieves and combines tool definitions from local functions, MCP client, Composio adapter, and LangChain adapter. Each tool definition includes the function schema with parameters and descriptions.

Returns:

| Type | Description |
| --- | --- |
| `list[dict]` | List of tool definitions in OpenAI function calling format. Each dict contains `'type': 'function'` and `'function'` with name, description, and parameters schema. |

Example
```python
tools = await tool_node.all_tools()
# Returns:
# [
#   {
#     "type": "function",
#     "function": {
#       "name": "weather_tool",
#       "description": "Get weather information for a location",
#       "parameters": {
#         "type": "object",
#         "properties": {
#           "location": {"type": "string"}
#         },
#         "required": ["location"]
#       }
#     }
#   }
# ]
```
Source code in pyagenity/graph/tool_node/base.py
async def all_tools(self) -> list[dict]:
    """Get all available tools from all configured providers.

    Retrieves and combines tool definitions from local functions, MCP client,
    Composio adapter, and LangChain adapter. Each tool definition includes
    the function schema with parameters and descriptions.

    Returns:
        List of tool definitions in OpenAI function calling format. Each dict
        contains 'type': 'function' and 'function' with name, description,
        and parameters schema.

    Example:
        ```python
        tools = await tool_node.all_tools()
        # Returns:
        # [
        #   {
        #     "type": "function",
        #     "function": {
        #       "name": "weather_tool",
        #       "description": "Get weather information for a location",
        #       "parameters": {
        #         "type": "object",
        #         "properties": {
        #           "location": {"type": "string"}
        #         },
        #         "required": ["location"]
        #       }
        #     }
        #   }
        # ]
        ```
    """
    return await self._all_tools_async()
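Since the returned definitions follow the OpenAI function-calling format, they can be inspected or passed straight to a compatible LLM client; a small sketch:

```python
tool_defs = await tools.all_tools()

# tool_defs is ready to use as the `tools` argument of any chat-completion
# API that accepts OpenAI-format tool schemas; here we just inspect them.
for definition in tool_defs:
    fn = definition["function"]
    print(fn["name"], "->", fn["description"])
```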
all_tools_sync
all_tools_sync()

Synchronously get all available tools from all configured providers.

This is a synchronous wrapper around the async all_tools() method. It uses asyncio.run() to handle async operations from MCP, Composio, and LangChain adapters.

Returns:

| Type | Description |
| --- | --- |
| `list[dict]` | List of tool definitions in OpenAI function calling format. |

Note

Prefer using the async all_tools() method when possible, especially in async contexts, to avoid potential event loop issues.

Source code in pyagenity/graph/tool_node/base.py
def all_tools_sync(self) -> list[dict]:
    """Synchronously get all available tools from all configured providers.

    This is a synchronous wrapper around the async all_tools() method.
    It uses asyncio.run() to handle async operations from MCP, Composio,
    and LangChain adapters.

    Returns:
        List of tool definitions in OpenAI function calling format.

    Note:
        Prefer using the async `all_tools()` method when possible, especially
        in async contexts, to avoid potential event loop issues.
    """
    tools: list[dict] = self.get_local_tool()
    if self._client:
        result = asyncio.run(self._get_mcp_tool())
        if result:
            tools.extend(result)
    comp = asyncio.run(self._get_composio_tools())
    if comp:
        tools.extend(comp)
    lc = asyncio.run(self._get_langchain_tools())
    if lc:
        tools.extend(lc)
    return tools
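A small sketch contrasting the two accessors; the sync variant relies on `asyncio.run()`, so it assumes no event loop is already running:

```python
import asyncio

# Synchronous code path (no running event loop):
tool_defs = tools.all_tools_sync()


# Inside async code, prefer the awaitable form:
async def load_tools():
    return await tools.all_tools()


tool_defs = asyncio.run(load_tools())
```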
get_local_tool
get_local_tool()

Generate OpenAI-compatible tool definitions for all registered local functions.

Inspects all registered functions in _funcs and automatically generates tool schemas by analyzing function signatures, type annotations, and docstrings. Excludes injectable parameters that are provided by the framework.

Returns:

| Type | Description |
| --- | --- |
| `list[dict]` | List of tool definitions in OpenAI function calling format. Each definition includes the function name, description (from docstring), and complete parameter schema with types and required fields. |

Example

For a function:

```python
def calculate(a: int, b: int, operation: str = "add") -> int:
    '''Perform arithmetic calculation.'''
    return a + b if operation == "add" else a - b
```

Returns:

```python
[
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Perform arithmetic calculation.",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "integer"},
                    "b": {"type": "integer"},
                    "operation": {"type": "string", "default": "add"},
                },
                "required": ["a", "b"],
            },
        },
    }
]
```

Note

Parameters listed in INJECTABLE_PARAMS (like 'state', 'config', 'tool_call_id') are automatically excluded from the generated schema as they are provided by the framework during execution.

Source code in pyagenity/graph/tool_node/schema.py
def get_local_tool(self) -> list[dict]:
    """Generate OpenAI-compatible tool definitions for all registered local functions.

    Inspects all registered functions in _funcs and automatically generates
    tool schemas by analyzing function signatures, type annotations, and docstrings.
    Excludes injectable parameters that are provided by the framework.

    Returns:
        List of tool definitions in OpenAI function calling format. Each
        definition includes the function name, description (from docstring),
        and complete parameter schema with types and required fields.

    Example:
        For a function:
        ```python
        def calculate(a: int, b: int, operation: str = "add") -> int:
            '''Perform arithmetic calculation.'''
            return a + b if operation == "add" else a - b
        ```

        Returns:
        ```python
        [
            {
                "type": "function",
                "function": {
                    "name": "calculate",
                    "description": "Perform arithmetic calculation.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "a": {"type": "integer"},
                            "b": {"type": "integer"},
                            "operation": {"type": "string", "default": "add"},
                        },
                        "required": ["a", "b"],
                    },
                },
            }
        ]
        ```

    Note:
        Parameters listed in INJECTABLE_PARAMS (like 'state', 'config',
        'tool_call_id') are automatically excluded from the generated schema
        as they are provided by the framework during execution.
    """
    tools: list[dict] = []
    for name, fn in self._funcs.items():
        sig = inspect.signature(fn)
        params_schema: dict = {"type": "object", "properties": {}, "required": []}

        for p_name, p in sig.parameters.items():
            if p.kind in (
                inspect.Parameter.VAR_POSITIONAL,
                inspect.Parameter.VAR_KEYWORD,
            ):
                continue

            if p_name in INJECTABLE_PARAMS:
                continue

            annotation = p.annotation if p.annotation is not inspect._empty else str
            prop = SchemaMixin._annotation_to_schema(annotation, p.default)
            params_schema["properties"][p_name] = prop

            if p.default is inspect._empty:
                params_schema["required"].append(p_name)

        if not params_schema["required"]:
            params_schema.pop("required")

        description = inspect.getdoc(fn) or "No description provided."

        # provider = getattr(fn, "_py_tool_provider", None)
        # tags = getattr(fn, "_py_tool_tags", None)
        # capabilities = getattr(fn, "_py_tool_capabilities", None)

        entry = {
            "type": "function",
            "function": {
                "name": name,
                "description": description,
                "parameters": params_schema,
            },
        }
        # meta: dict[str, t.Any] = {}
        # if provider:
        #     meta["provider"] = provider
        # if tags:
        #     meta["tags"] = tags
        # if capabilities:
        #     meta["capabilities"] = capabilities
        # if meta:
        #     entry["x-pyagenity"] = meta

        tools.append(entry)

    return tools
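To illustrate the injectable-parameter exclusion described above, a sketch with a hypothetical local tool that declares framework-provided parameters:

```python
def lookup_order(order_id: str, state, tool_call_id: str) -> str:
    """Look up an order by its id."""
    return f"Order {order_id}: shipped"


tools = ToolNode([lookup_order])
schema = tools.get_local_tool()[0]["function"]["parameters"]

# Only 'order_id' survives: 'state' and 'tool_call_id' are in INJECTABLE_PARAMS,
# so the framework supplies them at execution time and omits them from the schema.
print(list(schema["properties"]))  # ['order_id']
print(schema["required"])  # ['order_id']
```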
invoke async
invoke(name, args, tool_call_id, config, state, callback_manager=Inject[CallbackManager])

Execute a specific tool by name with the provided arguments.

This method handles tool execution across all configured providers (local, MCP, Composio, LangChain) with comprehensive error handling, event publishing, and callback management.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | The name of the tool to execute. | *required* |
| `args` | `dict` | Dictionary of arguments to pass to the tool function. | *required* |
| `tool_call_id` | `str` | Unique identifier for this tool execution, used for tracking and result correlation. | *required* |
| `config` | `dict[str, Any]` | Configuration dictionary containing execution context and user-specific settings. | *required* |
| `state` | `AgentState` | Current agent state for context-aware tool execution. | *required* |
| `callback_manager` | `CallbackManager` | Manager for executing pre/post execution callbacks. Injected via dependency injection if not provided. | `Inject[CallbackManager]` |

Returns:

| Type | Description |
| --- | --- |
| `Any` | Message object containing tool execution results, either successful output or error information with appropriate status indicators. |

Example
```python
result = await tool_node.invoke(
    name="weather_tool",
    args={"location": "Paris", "units": "metric"},
    tool_call_id="call_abc123",
    config={"user_id": "user1", "session_id": "session1"},
    state=current_agent_state,
)

# result is a Message with tool execution results
print(result.content)  # Tool output or error information
```
Note

The method publishes execution events throughout the process for monitoring and debugging purposes. Tool execution is routed based on tool provider precedence: MCP → Composio → LangChain → Local.

Source code in pyagenity/graph/tool_node/base.py
async def invoke(  # noqa: PLR0915
    self,
    name: str,
    args: dict,
    tool_call_id: str,
    config: dict[str, t.Any],
    state: AgentState,
    callback_manager: CallbackManager = Inject[CallbackManager],
) -> t.Any:
    """Execute a specific tool by name with the provided arguments.

    This method handles tool execution across all configured providers (local,
    MCP, Composio, LangChain) with comprehensive error handling, event publishing,
    and callback management.

    Args:
        name: The name of the tool to execute.
        args: Dictionary of arguments to pass to the tool function.
        tool_call_id: Unique identifier for this tool execution, used for
            tracking and result correlation.
        config: Configuration dictionary containing execution context and
            user-specific settings.
        state: Current agent state for context-aware tool execution.
        callback_manager: Manager for executing pre/post execution callbacks.
            Injected via dependency injection if not provided.

    Returns:
        Message object containing tool execution results, either successful
        output or error information with appropriate status indicators.

    Raises:
        The method handles all exceptions internally and returns error Messages
        rather than raising exceptions, ensuring robust execution flow.

    Example:
        ```python
        result = await tool_node.invoke(
            name="weather_tool",
            args={"location": "Paris", "units": "metric"},
            tool_call_id="call_abc123",
            config={"user_id": "user1", "session_id": "session1"},
            state=current_agent_state,
        )

        # result is a Message with tool execution results
        print(result.content)  # Tool output or error information
        ```

    Note:
        The method publishes execution events throughout the process for
        monitoring and debugging purposes. Tool execution is routed based
        on tool provider precedence: MCP → Composio → LangChain → Local.
    """
    logger.info("Executing tool '%s' with %d arguments", name, len(args))
    logger.debug("Tool arguments: %s", args)

    event = EventModel.default(
        config,
        data={"args": args, "tool_call_id": tool_call_id, "function_name": name},
        content_type=[ContentType.TOOL_CALL],
        event=Event.TOOL_EXECUTION,
    )
    event.node_name = name
    # Attach structured tool call block
    with contextlib.suppress(Exception):
        event.content_blocks = [ToolCallBlock(id=tool_call_id, name=name, args=args)]
    publish_event(event)

    if name in self.mcp_tools:
        event.metadata["is_mcp"] = True
        publish_event(event)
        res = await self._mcp_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = res.model_dump()
        # Attach tool result block mirroring the tool output
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        return res

    if name in self.composio_tools:
        event.metadata["is_composio"] = True
        publish_event(event)
        res = await self._composio_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = res.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        return res

    if name in self.langchain_tools:
        event.metadata["is_langchain"] = True
        publish_event(event)
        res = await self._langchain_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = res.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        return res

    if name in self._funcs:
        event.metadata["is_mcp"] = False
        publish_event(event)
        res = await self._internal_execute(
            name,
            args,
            tool_call_id,
            config,
            state,
            callback_manager,
        )
        event.data["message"] = res.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        return res

    error_msg = f"Tool '{name}' not found."
    event.data["error"] = error_msg
    event.event_type = EventType.ERROR
    event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
    publish_event(event)
    return Message.tool_message(
        content=[
            ErrorBlock(message=error_msg),
            ToolResultBlock(
                call_id=tool_call_id,
                output=error_msg,
                is_error=True,
                status="failed",
            ),
        ],
    )
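Because `invoke()` reports failures as error Messages instead of raising, unknown tool names can be handled in-line; a sketch (block attribute access follows the `ErrorBlock`/`ToolResultBlock` usage in the source above and may differ in detail):

```python
result = await tools.invoke(
    name="no_such_tool",
    args={},
    tool_call_id="call_err_1",
    config={"user_id": "user1"},
    state=agent_state,
)

# The returned tool Message carries an ErrorBlock plus a ToolResultBlock
# with is_error=True and status="failed" rather than raising an exception.
for block in result.content:
    print(type(block).__name__, getattr(block, "output", None) or getattr(block, "message", None))
```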
stream async
stream(name, args, tool_call_id, config, state, callback_manager=Inject[CallbackManager])

Execute a tool with streaming support, yielding incremental results.

Similar to invoke() but designed for tools that can provide streaming responses or when you want to process results as they become available. Currently, most tool providers return complete results, so this method typically yields a single Message with the full result.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | The name of the tool to execute. | *required* |
| `args` | `dict` | Dictionary of arguments to pass to the tool function. | *required* |
| `tool_call_id` | `str` | Unique identifier for this tool execution. | *required* |
| `config` | `dict[str, Any]` | Configuration dictionary containing execution context. | *required* |
| `state` | `AgentState` | Current agent state for context-aware tool execution. | *required* |
| `callback_manager` | `CallbackManager` | Manager for executing pre/post execution callbacks. | `Inject[CallbackManager]` |

Yields:

| Type | Description |
| --- | --- |
| `AsyncIterator[Message]` | Message objects containing tool execution results or status updates. For most tools, this will yield a single complete result Message. |

Example
```python
async for message in tool_node.stream(
    name="data_processor",
    args={"dataset": "large_data.csv"},
    tool_call_id="call_stream123",
    config={"user_id": "user1"},
    state=current_state,
):
    print(f"Received: {message.content}")
    # Process each streamed result
```
Note

The streaming interface is designed for future expansion where tools may provide true streaming responses. Currently, it provides a consistent async iterator interface over tool results.

Source code in pyagenity/graph/tool_node/base.py
async def stream(  # noqa: PLR0915
    self,
    name: str,
    args: dict,
    tool_call_id: str,
    config: dict[str, t.Any],
    state: AgentState,
    callback_manager: CallbackManager = Inject[CallbackManager],
) -> t.AsyncIterator[Message]:
    """Execute a tool with streaming support, yielding incremental results.

    Similar to invoke() but designed for tools that can provide streaming responses
    or when you want to process results as they become available. Currently,
    most tool providers return complete results, so this method typically yields
    a single Message with the full result.

    Args:
        name: The name of the tool to execute.
        args: Dictionary of arguments to pass to the tool function.
        tool_call_id: Unique identifier for this tool execution.
        config: Configuration dictionary containing execution context.
        state: Current agent state for context-aware tool execution.
        callback_manager: Manager for executing pre/post execution callbacks.

    Yields:
        Message objects containing tool execution results or status updates.
        For most tools, this will yield a single complete result Message.

    Example:
        ```python
        async for message in tool_node.stream(
            name="data_processor",
            args={"dataset": "large_data.csv"},
            tool_call_id="call_stream123",
            config={"user_id": "user1"},
            state=current_state,
        ):
            print(f"Received: {message.content}")
            # Process each streamed result
        ```

    Note:
        The streaming interface is designed for future expansion where tools
        may provide true streaming responses. Currently, it provides a
        consistent async iterator interface over tool results.
    """
    logger.info("Executing tool '%s' with %d arguments", name, len(args))
    logger.debug("Tool arguments: %s", args)
    event = EventModel.default(
        config,
        data={"args": args, "tool_call_id": tool_call_id, "function_name": name},
        content_type=[ContentType.TOOL_CALL],
        event=Event.TOOL_EXECUTION,
    )
    event.node_name = "ToolNode"
    with contextlib.suppress(Exception):
        event.content_blocks = [ToolCallBlock(id=tool_call_id, name=name, args=args)]

    if name in self.mcp_tools:
        event.metadata["function_type"] = "mcp"
        publish_event(event)
        message = await self._mcp_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = message.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        yield message
        return

    if name in self.composio_tools:
        event.metadata["function_type"] = "composio"
        publish_event(event)
        message = await self._composio_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = message.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        yield message
        return

    if name in self.langchain_tools:
        event.metadata["function_type"] = "langchain"
        publish_event(event)
        message = await self._langchain_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = message.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        yield message
        return

    if name in self._funcs:
        event.metadata["function_type"] = "internal"
        publish_event(event)

        result = await self._internal_execute(
            name,
            args,
            tool_call_id,
            config,
            state,
            callback_manager,
        )
        event.data["message"] = result.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=result.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        yield result
        return

    error_msg = f"Tool '{name}' not found."
    event.data["error"] = error_msg
    event.event_type = EventType.ERROR
    event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
    publish_event(event)

    yield Message.tool_message(
        content=[
            ErrorBlock(message=error_msg),
            ToolResultBlock(
                call_id=tool_call_id,
                output=error_msg,
                is_error=True,
                status="failed",
            ),
        ],
    )
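When incremental handling is not needed, the async iterator can simply be collected; a minimal sketch:

```python
# Collect every streamed Message into a list (most tools yield exactly one).
messages = [
    msg
    async for msg in tools.stream(
        name="weather_tool",
        args={"location": "Paris"},
        tool_call_id="call_456",
        config={"user_id": "user1"},
        state=agent_state,
    )
]
```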

Modules

compiled_graph

Classes:

| Name | Description |
| --- | --- |
| `CompiledGraph` | A fully compiled and executable graph ready for workflow execution. |

Attributes:

| Name | Definition |
| --- | --- |
| `StateT` (module attribute) | `StateT = TypeVar('StateT', bound=AgentState)` |
| `logger` (module attribute) | `logger = getLogger(__name__)` |

Classes

CompiledGraph

A fully compiled and executable graph ready for workflow execution.

CompiledGraph represents the final executable form of a StateGraph after compilation. It encapsulates all the execution logic, handlers, and services needed to run agent workflows. The graph supports both synchronous and asynchronous execution with comprehensive state management, checkpointing, event publishing, and streaming capabilities.

This class is generic over state types to support custom AgentState subclasses, ensuring type safety throughout the execution process.

Key Features:

- Synchronous and asynchronous execution methods
- Real-time streaming with incremental results
- State persistence and checkpointing
- Interrupt and resume capabilities
- Event publishing for monitoring and debugging
- Background task management
- Graceful error handling and recovery

Attributes:

Name Type Description
_state

The initial/template state for graph executions.

_invoke_handler InvokeHandler[StateT]

Handler for non-streaming graph execution.

_stream_handler StreamHandler[StateT]

Handler for streaming graph execution.

_checkpointer BaseCheckpointer[StateT] | None

Optional state persistence backend.

_publisher BasePublisher | None

Optional event publishing backend.

_store BaseStore | None

Optional data storage backend.

_state_graph StateGraph[StateT]

Reference to the source StateGraph.

_interrupt_before list[str]

Nodes where execution should pause before the node runs.

_interrupt_after list[str]

Nodes where execution should pause after the node completes.

_task_manager

Manager for background async tasks.

Example
# After building and compiling a StateGraph
compiled = graph.compile()

# Synchronous execution
result = compiled.invoke({"messages": [Message.text_message("Hello")]})

# Asynchronous execution with streaming
async for chunk in compiled.astream({"messages": [message]}):
    print(f"Streamed: {chunk.content}")

# Graceful cleanup
await compiled.aclose()
Note

CompiledGraph instances should be properly closed using aclose() to release resources like database connections, background tasks, and event publishers.

Methods:

Name Description
__init__
aclose

Close the graph and release any resources.

ainvoke

Execute the graph asynchronously.

astop

Request the current graph execution to stop (async).

astream

Execute the graph asynchronously with streaming support.

generate_graph

Generate the graph representation.

invoke

Execute the graph synchronously and return the final results.

stop

Request the current graph execution to stop (sync helper).

stream

Execute the graph synchronously with streaming support.

Source code in pyagenity/graph/compiled_graph.py
class CompiledGraph[StateT: AgentState]:
    """A fully compiled and executable graph ready for workflow execution.

    CompiledGraph represents the final executable form of a StateGraph after compilation.
    It encapsulates all the execution logic, handlers, and services needed to run
    agent workflows. The graph supports both synchronous and asynchronous execution
    with comprehensive state management, checkpointing, event publishing, and
    streaming capabilities.

    This class is generic over state types to support custom AgentState subclasses,
    ensuring type safety throughout the execution process.

    Key Features:
    - Synchronous and asynchronous execution methods
    - Real-time streaming with incremental results
    - State persistence and checkpointing
    - Interrupt and resume capabilities
    - Event publishing for monitoring and debugging
    - Background task management
    - Graceful error handling and recovery

    Attributes:
        _state: The initial/template state for graph executions.
        _invoke_handler: Handler for non-streaming graph execution.
        _stream_handler: Handler for streaming graph execution.
        _checkpointer: Optional state persistence backend.
        _publisher: Optional event publishing backend.
        _store: Optional data storage backend.
        _state_graph: Reference to the source StateGraph.
        _interrupt_before: Nodes where execution should pause before the node runs.
        _interrupt_after: Nodes where execution should pause after the node completes.
        _task_manager: Manager for background async tasks.

    Example:
        ```python
        # After building and compiling a StateGraph
        compiled = graph.compile()

        # Synchronous execution
        result = compiled.invoke({"messages": [Message.text_message("Hello")]})

        # Asynchronous execution with streaming
        async for chunk in compiled.astream({"messages": [message]}):
            print(f"Streamed: {chunk.content}")

        # Graceful cleanup
        await compiled.aclose()
        ```

    Note:
        CompiledGraph instances should be properly closed using aclose() to
        release resources like database connections, background tasks, and
        event publishers.
    """

    def __init__(
        self,
        state: StateT,
        checkpointer: BaseCheckpointer[StateT] | None,
        publisher: BasePublisher | None,
        store: BaseStore | None,
        state_graph: StateGraph[StateT],
        interrupt_before: list[str],
        interrupt_after: list[str],
        task_manager: BackgroundTaskManager,
    ):
        logger.info(
            f"Initializing CompiledGraph with nodes: {list(state_graph.nodes.keys())}",
        )

        # Save initial state
        self._state = state

        # create handlers
        self._invoke_handler: InvokeHandler[StateT] = InvokeHandler[StateT](
            nodes=state_graph.nodes,  # type: ignore
            edges=state_graph.edges,  # type: ignore
        )
        self._stream_handler: StreamHandler[StateT] = StreamHandler[StateT](
            nodes=state_graph.nodes,  # type: ignore
            edges=state_graph.edges,  # type: ignore
        )

        self._checkpointer: BaseCheckpointer[StateT] | None = checkpointer
        self._publisher: BasePublisher | None = publisher
        self._store: BaseStore | None = store
        self._state_graph: StateGraph[StateT] = state_graph
        self._interrupt_before: list[str] = interrupt_before
        self._interrupt_after: list[str] = interrupt_after
        # generate task manager
        self._task_manager = task_manager

    def _prepare_config(
        self,
        config: dict[str, Any] | None,
        is_stream: bool = False,
    ) -> dict[str, Any]:
        cfg = config or {}
        if "is_stream" not in cfg:
            cfg["is_stream"] = is_stream
        if "user_id" not in cfg:
            cfg["user_id"] = "test-user-id"  # mock user id
        if "run_id" not in cfg:
            cfg["run_id"] = InjectQ.get_instance().try_get("generated_id") or str(uuid4())

        if "timestamp" not in cfg:
            cfg["timestamp"] = datetime.datetime.now().isoformat()

        return cfg

    def invoke(
        self,
        input_data: dict[str, Any],
        config: dict[str, Any] | None = None,
        response_granularity: ResponseGranularity = ResponseGranularity.LOW,
    ) -> dict[str, Any]:
        """Execute the graph synchronously and return the final results.

        Runs the complete graph workflow from start to finish, handling state
        management, node execution, and result formatting. This method automatically
        detects whether to start a fresh execution or resume from an interrupted state.

        The execution is synchronous but internally uses async operations, making it
        suitable for use in non-async contexts while still benefiting from async
        capabilities for I/O operations.

        Args:
            input_data: Input dictionary for graph execution. For new executions,
                should contain 'messages' key with list of initial messages.
                For resumed executions, can contain additional data to merge.
            config: Optional configuration dictionary containing execution settings:
                - user_id: Identifier for the user/session
                - thread_id: Unique identifier for this execution thread
                - run_id: Unique identifier for this specific run
                - recursion_limit: Maximum steps before stopping (default: 25)
            response_granularity: Level of detail in the response:
                - LOW: Returns only messages (default)
                - PARTIAL: Returns context, summary, and messages
                - FULL: Returns complete state and messages

        Returns:
            Dictionary containing execution results formatted according to the
            specified granularity level. Always includes execution messages
            and may include additional state information.

        Raises:
            ValueError: If input_data is invalid for new execution.
            GraphRecursionError: If execution exceeds recursion limit.
            Various exceptions: Depending on node execution failures.

        Example:
            ```python
            # Basic execution
            result = compiled.invoke({"messages": [Message.text_message("Process this data")]})
            print(result["messages"])  # Final execution messages

            # With configuration and full details
            result = compiled.invoke(
                input_data={"messages": [message]},
                config={"user_id": "user123", "thread_id": "session456", "recursion_limit": 50},
                response_granularity=ResponseGranularity.FULL,
            )
            print(result["state"])  # Complete final state
            ```

        Note:
            This method uses asyncio.run() internally, so it should not be called
            from within an async context. Use ainvoke() instead for async execution.
        """
        logger.info(
            "Starting synchronous graph execution with %d input keys, granularity=%s",
            len(input_data) if input_data else 0,
            response_granularity,
        )
        logger.debug("Input data keys: %s", list(input_data.keys()) if input_data else [])
        # Async Will Handle Event Publish

        try:
            result = asyncio.run(self.ainvoke(input_data, config, response_granularity))
            logger.info("Synchronous graph execution completed successfully")
            return result
        except Exception as e:
            logger.exception("Synchronous graph execution failed: %s", e)
            raise

    async def ainvoke(
        self,
        input_data: dict[str, Any],
        config: dict[str, Any] | None = None,
        response_granularity: ResponseGranularity = ResponseGranularity.LOW,
    ) -> dict[str, Any]:
        """Execute the graph asynchronously.

        Auto-detects whether to start fresh execution or resume from interrupted state
        based on the AgentState's execution metadata.

        Args:
            input_data: Input dict with 'messages' key (for new execution) or
                       additional data for resuming
            config: Configuration dictionary
            response_granularity: Response parsing granularity

        Returns:
            Response dict based on granularity
        """
        cfg = self._prepare_config(config, is_stream=False)

        return await self._invoke_handler.invoke(
            input_data,
            cfg,
            self._state,
            response_granularity,
        )

    def stop(self, config: dict[str, Any]) -> dict[str, Any]:
        """Request the current graph execution to stop (sync helper).

        This sets a stop flag in the checkpointer's thread store keyed by thread_id.
        Handlers periodically check this flag and interrupt execution.
        Returns a small status dict.
        """
        return asyncio.run(self.astop(config))

    async def astop(self, config: dict[str, Any]) -> dict[str, Any]:
        """Request the current graph execution to stop (async).

        Contract:
        - Requires a valid thread_id in config
        - If no active thread or no checkpointer, returns not-running
        - If state exists and is running, set stop_requested flag in thread info
        """
        cfg = self._prepare_config(config, is_stream=bool(config.get("is_stream", False)))
        if not self._checkpointer:
            return {"ok": False, "reason": "no-checkpointer"}

        # Load state to see if this thread is running
        state = await self._checkpointer.aget_state_cache(
            cfg
        ) or await self._checkpointer.aget_state(cfg)
        if not state:
            return {"ok": False, "running": False, "reason": "no-state"}

        running = state.is_running() and not state.is_interrupted()
        # Set stop flag regardless; handlers will act if running
        if running:
            state.execution_meta.stop_current_execution = StopRequestStatus.STOP_REQUESTED
            # update cache
            # Cache update is enough; state will be picked up by running execution
            # As its running, cache will be available immediately
            await self._checkpointer.aput_state_cache(cfg, state)
            # Fixme: consider putting to main state as well
            # await self._checkpointer.aput_state(cfg, state)
            logger.info("Set stop_current_execution flag for thread_id: %s", cfg.get("thread_id"))
            return {"ok": True, "running": running}

        logger.info(
            "No running execution to stop for thread_id: %s (running=%s, interrupted=%s)",
            cfg.get("thread_id"),
            running,
            state.is_interrupted(),
        )
        return {"ok": True, "running": running, "reason": "not-running"}

    def stream(
        self,
        input_data: dict[str, Any],
        config: dict[str, Any] | None = None,
        response_granularity: ResponseGranularity = ResponseGranularity.LOW,
    ) -> Generator[Message]:
        """Execute the graph synchronously with streaming support.

        Yields Message objects containing incremental responses.
        If nodes return streaming responses, yields them directly.
        If nodes return complete responses, simulates streaming by chunking.

        Args:
            input_data: Input dict
            config: Configuration dictionary
            response_granularity: Response parsing granularity

        Yields:
            Message objects with incremental content
        """

        # For sync streaming, we'll use asyncio.run to handle the async implementation
        async def _async_stream():
            async for chunk in self.astream(input_data, config, response_granularity):
                yield chunk

        # Convert async generator to sync iteration with a dedicated event loop
        gen = _async_stream()
        loop = asyncio.new_event_loop()
        policy = asyncio.get_event_loop_policy()
        try:
            previous_loop = policy.get_event_loop()
        except Exception:
            previous_loop = None
        asyncio.set_event_loop(loop)
        logger.info("Synchronous streaming started")

        try:
            while True:
                try:
                    chunk = loop.run_until_complete(gen.__anext__())
                    yield chunk
                except StopAsyncIteration:
                    break
        finally:
            # Attempt to close the async generator cleanly
            with contextlib.suppress(Exception):
                loop.run_until_complete(gen.aclose())  # type: ignore[attr-defined]
            # Restore previous loop if any, then close created loop
            try:
                if previous_loop is not None:
                    asyncio.set_event_loop(previous_loop)
            finally:
                loop.close()
        logger.info("Synchronous streaming completed")

    async def astream(
        self,
        input_data: dict[str, Any],
        config: dict[str, Any] | None = None,
        response_granularity: ResponseGranularity = ResponseGranularity.LOW,
    ) -> AsyncIterator[Message]:
        """Execute the graph asynchronously with streaming support.

        Yields Message objects containing incremental responses.
        If nodes return streaming responses, yields them directly.
        If nodes return complete responses, simulates streaming by chunking.

        Args:
            input_data: Input dict
            config: Configuration dictionary
            response_granularity: Response parsing granularity

        Yields:
            Message objects with incremental content
        """

        cfg = self._prepare_config(config, is_stream=True)

        async for chunk in self._stream_handler.stream(
            input_data,
            cfg,
            self._state,
            response_granularity,
        ):
            yield chunk

    async def aclose(self) -> dict[str, str]:
        """Close the graph and release any resources."""
        # close checkpointer
        stats = {}
        try:
            if self._checkpointer:
                await self._checkpointer.arelease()
                logger.info("Checkpointer closed successfully")
                stats["checkpointer"] = "closed"
        except Exception as e:
            stats["checkpointer"] = f"error: {e}"
            logger.error(f"Error closing graph: {e}")

        # Close Publisher
        try:
            if self._publisher:
                await self._publisher.close()
                logger.info("Publisher closed successfully")
                stats["publisher"] = "closed"
        except Exception as e:
            stats["publisher"] = f"error: {e}"
            logger.error(f"Error closing publisher: {e}")

        # Close Store
        try:
            if self._store:
                await self._store.arelease()
                logger.info("Store closed successfully")
                stats["store"] = "closed"
        except Exception as e:
            stats["store"] = f"error: {e}"
            logger.error(f"Error closing store: {e}")

        # Wait for all background tasks to complete
        try:
            await self._task_manager.wait_for_all()
            logger.info("All background tasks completed successfully")
            stats["background_tasks"] = "completed"
        except Exception as e:
            stats["background_tasks"] = f"error: {e}"
            logger.error(f"Error waiting for background tasks: {e}")

        logger.info(f"Graph close stats: {stats}")
        # You can also return or process the stats as needed
        return stats

    def generate_graph(self) -> dict[str, Any]:
        """Generate the graph representation.

        Returns:
            A dictionary representing the graph structure.
        """
        graph = {
            "info": {},
            "nodes": [],
            "edges": [],
        }
        # Populate the graph with nodes and edges
        for node_name in self._state_graph.nodes:
            graph["nodes"].append(
                {
                    "id": str(uuid4()),
                    "name": node_name,
                }
            )

        for edge in self._state_graph.edges:
            graph["edges"].append(
                {
                    "id": str(uuid4()),
                    "source": edge.from_node,
                    "target": edge.to_node,
                }
            )

        # Add few more extra info
        graph["info"] = {
            "node_count": len(graph["nodes"]),
            "edge_count": len(graph["edges"]),
            "checkpointer": self._checkpointer is not None,
            "checkpointer_type": type(self._checkpointer).__name__ if self._checkpointer else None,
            "publisher": self._publisher is not None,
            "store": self._store is not None,
            "interrupt_before": self._interrupt_before,
            "interrupt_after": self._interrupt_after,
            "context_type": self._state_graph._context_manager.__class__.__name__,
            "id_generator": self._state_graph._id_generator.__class__.__name__,
            "id_type": self._state_graph._id_generator.id_type.value,
            "state_type": self._state.__class__.__name__,
            "state_fields": list(self._state.model_dump().keys()),
        }
        return graph
Functions
__init__
__init__(state, checkpointer, publisher, store, state_graph, interrupt_before, interrupt_after, task_manager)
Source code in pyagenity/graph/compiled_graph.py
def __init__(
    self,
    state: StateT,
    checkpointer: BaseCheckpointer[StateT] | None,
    publisher: BasePublisher | None,
    store: BaseStore | None,
    state_graph: StateGraph[StateT],
    interrupt_before: list[str],
    interrupt_after: list[str],
    task_manager: BackgroundTaskManager,
):
    logger.info(
        f"Initializing CompiledGraph with nodes: {list(state_graph.nodes.keys())}",
    )

    # Save initial state
    self._state = state

    # create handlers
    self._invoke_handler: InvokeHandler[StateT] = InvokeHandler[StateT](
        nodes=state_graph.nodes,  # type: ignore
        edges=state_graph.edges,  # type: ignore
    )
    self._stream_handler: StreamHandler[StateT] = StreamHandler[StateT](
        nodes=state_graph.nodes,  # type: ignore
        edges=state_graph.edges,  # type: ignore
    )

    self._checkpointer: BaseCheckpointer[StateT] | None = checkpointer
    self._publisher: BasePublisher | None = publisher
    self._store: BaseStore | None = store
    self._state_graph: StateGraph[StateT] = state_graph
    self._interrupt_before: list[str] = interrupt_before
    self._interrupt_after: list[str] = interrupt_after
    # generate task manager
    self._task_manager = task_manager
aclose async
aclose()

Close the graph and release any resources.

Source code in pyagenity/graph/compiled_graph.py
async def aclose(self) -> dict[str, str]:
    """Close the graph and release any resources."""
    # close checkpointer
    stats = {}
    try:
        if self._checkpointer:
            await self._checkpointer.arelease()
            logger.info("Checkpointer closed successfully")
            stats["checkpointer"] = "closed"
    except Exception as e:
        stats["checkpointer"] = f"error: {e}"
        logger.error(f"Error closing graph: {e}")

    # Close Publisher
    try:
        if self._publisher:
            await self._publisher.close()
            logger.info("Publisher closed successfully")
            stats["publisher"] = "closed"
    except Exception as e:
        stats["publisher"] = f"error: {e}"
        logger.error(f"Error closing publisher: {e}")

    # Close Store
    try:
        if self._store:
            await self._store.arelease()
            logger.info("Store closed successfully")
            stats["store"] = "closed"
    except Exception as e:
        stats["store"] = f"error: {e}"
        logger.error(f"Error closing store: {e}")

    # Wait for all background tasks to complete
    try:
        await self._task_manager.wait_for_all()
        logger.info("All background tasks completed successfully")
        stats["background_tasks"] = "completed"
    except Exception as e:
        stats["background_tasks"] = f"error: {e}"
        logger.error(f"Error waiting for background tasks: {e}")

    logger.info(f"Graph close stats: {stats}")
    # You can also return or process the stats as needed
    return stats
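
A short usage sketch: the returned stats dictionary reports, per resource, whether it shut down cleanly, using the status strings written by the method above.

```python
# Minimal cleanup sketch; `compiled` is a CompiledGraph from graph.compile().
stats = await compiled.aclose()

for resource, status in stats.items():
    # e.g. "checkpointer: closed", "background_tasks: completed", or "store: error: <reason>"
    print(f"{resource}: {status}")
```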
ainvoke async
ainvoke(input_data, config=None, response_granularity=ResponseGranularity.LOW)

Execute the graph asynchronously.

Auto-detects whether to start fresh execution or resume from interrupted state based on the AgentState's execution metadata.

Parameters:

Name Type Description Default
input_data dict[str, Any]

Input dict with 'messages' key (for new execution) or additional data for resuming

required
config dict[str, Any] | None

Configuration dictionary

None
response_granularity ResponseGranularity

Response parsing granularity

LOW

Returns:

Type Description
dict[str, Any]

Response dict based on granularity

Source code in pyagenity/graph/compiled_graph.py
async def ainvoke(
    self,
    input_data: dict[str, Any],
    config: dict[str, Any] | None = None,
    response_granularity: ResponseGranularity = ResponseGranularity.LOW,
) -> dict[str, Any]:
    """Execute the graph asynchronously.

    Auto-detects whether to start fresh execution or resume from interrupted state
    based on the AgentState's execution metadata.

    Args:
        input_data: Input dict with 'messages' key (for new execution) or
                   additional data for resuming
        config: Configuration dictionary
        response_granularity: Response parsing granularity

    Returns:
        Response dict based on granularity
    """
    cfg = self._prepare_config(config, is_stream=False)

    return await self._invoke_handler.invoke(
        input_data,
        cfg,
        self._state,
        response_granularity,
    )
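
For async callers, a minimal sketch (the import path for Message is assumed from the usage examples earlier on this page):

```python
import asyncio

from pyagenity.utils import Message  # import path assumed


async def main():
    # `compiled` is a CompiledGraph obtained from graph.compile().
    result = await compiled.ainvoke(
        {"messages": [Message.text_message("Hello")]},
        config={"thread_id": "session-1"},  # optional; missing keys are filled by _prepare_config
    )
    print(result["messages"])


asyncio.run(main())
```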
astop async
astop(config)

Request the current graph execution to stop (async).

Contract:

- Requires a valid thread_id in config
- If no active thread or no checkpointer, returns not-running
- If state exists and is running, set stop_requested flag in thread info

Source code in pyagenity/graph/compiled_graph.py
async def astop(self, config: dict[str, Any]) -> dict[str, Any]:
    """Request the current graph execution to stop (async).

    Contract:
    - Requires a valid thread_id in config
    - If no active thread or no checkpointer, returns not-running
    - If state exists and is running, set stop_requested flag in thread info
    """
    cfg = self._prepare_config(config, is_stream=bool(config.get("is_stream", False)))
    if not self._checkpointer:
        return {"ok": False, "reason": "no-checkpointer"}

    # Load state to see if this thread is running
    state = await self._checkpointer.aget_state_cache(
        cfg
    ) or await self._checkpointer.aget_state(cfg)
    if not state:
        return {"ok": False, "running": False, "reason": "no-state"}

    running = state.is_running() and not state.is_interrupted()
    # Set stop flag regardless; handlers will act if running
    if running:
        state.execution_meta.stop_current_execution = StopRequestStatus.STOP_REQUESTED
        # update cache
        # Cache update is enough; state will be picked up by running execution
        # As its running, cache will be available immediately
        await self._checkpointer.aput_state_cache(cfg, state)
        # Fixme: consider putting to main state as well
        # await self._checkpointer.aput_state(cfg, state)
        logger.info("Set stop_current_execution flag for thread_id: %s", cfg.get("thread_id"))
        return {"ok": True, "running": running}

    logger.info(
        "No running execution to stop for thread_id: %s (running=%s, interrupted=%s)",
        cfg.get("thread_id"),
        running,
        state.is_interrupted(),
    )
    return {"ok": True, "running": running, "reason": "not-running"}
astream async
astream(input_data, config=None, response_granularity=ResponseGranularity.LOW)

Execute the graph asynchronously with streaming support.

Yields Message objects containing incremental responses. If nodes return streaming responses, yields them directly. If nodes return complete responses, simulates streaming by chunking.

Parameters:

Name Type Description Default
input_data dict[str, Any]

Input dict

required
config dict[str, Any] | None

Configuration dictionary

None
response_granularity ResponseGranularity

Response parsing granularity

LOW

Yields:

Type Description
AsyncIterator[Message]

Message objects with incremental content

Source code in pyagenity/graph/compiled_graph.py
async def astream(
    self,
    input_data: dict[str, Any],
    config: dict[str, Any] | None = None,
    response_granularity: ResponseGranularity = ResponseGranularity.LOW,
) -> AsyncIterator[Message]:
    """Execute the graph asynchronously with streaming support.

    Yields Message objects containing incremental responses.
    If nodes return streaming responses, yields them directly.
    If nodes return complete responses, simulates streaming by chunking.

    Args:
        input_data: Input dict
        config: Configuration dictionary
        response_granularity: Response parsing granularity

    Yields:
        Message objects with incremental content
    """

    cfg = self._prepare_config(config, is_stream=True)

    async for chunk in self._stream_handler.stream(
        input_data,
        cfg,
        self._state,
        response_granularity,
    ):
        yield chunk
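
Streaming usage mirrors ainvoke but iterates incremental Message chunks. A minimal sketch (Message import path assumed):

```python
import asyncio

from pyagenity.utils import Message  # import path assumed


async def consume():
    # `compiled` is a CompiledGraph obtained from graph.compile().
    async for chunk in compiled.astream(
        {"messages": [Message.text_message("Hello")]},
        config={"thread_id": "session-1"},
    ):
        print(chunk.content)


asyncio.run(consume())
```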
generate_graph
generate_graph()

Generate the graph representation.

Returns:

Type Description
dict[str, Any]

A dictionary representing the graph structure.

Source code in pyagenity/graph/compiled_graph.py
def generate_graph(self) -> dict[str, Any]:
    """Generate the graph representation.

    Returns:
        A dictionary representing the graph structure.
    """
    graph = {
        "info": {},
        "nodes": [],
        "edges": [],
    }
    # Populate the graph with nodes and edges
    for node_name in self._state_graph.nodes:
        graph["nodes"].append(
            {
                "id": str(uuid4()),
                "name": node_name,
            }
        )

    for edge in self._state_graph.edges:
        graph["edges"].append(
            {
                "id": str(uuid4()),
                "source": edge.from_node,
                "target": edge.to_node,
            }
        )

    # Add few more extra info
    graph["info"] = {
        "node_count": len(graph["nodes"]),
        "edge_count": len(graph["edges"]),
        "checkpointer": self._checkpointer is not None,
        "checkpointer_type": type(self._checkpointer).__name__ if self._checkpointer else None,
        "publisher": self._publisher is not None,
        "store": self._store is not None,
        "interrupt_before": self._interrupt_before,
        "interrupt_after": self._interrupt_after,
        "context_type": self._state_graph._context_manager.__class__.__name__,
        "id_generator": self._state_graph._id_generator.__class__.__name__,
        "id_type": self._state_graph._id_generator.id_type.value,
        "state_type": self._state.__class__.__name__,
        "state_fields": list(self._state.model_dump().keys()),
    }
    return graph
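
A quick inspection sketch using the structure produced above:

```python
# Inspect the compiled graph's structure and configuration summary.
graph_repr = compiled.generate_graph()  # `compiled` is a CompiledGraph

print(graph_repr["info"]["node_count"], "nodes,", graph_repr["info"]["edge_count"], "edges")
for edge in graph_repr["edges"]:
    print(f"{edge['source']} -> {edge['target']}")
```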
invoke
invoke(input_data, config=None, response_granularity=ResponseGranularity.LOW)

Execute the graph synchronously and return the final results.

Runs the complete graph workflow from start to finish, handling state management, node execution, and result formatting. This method automatically detects whether to start a fresh execution or resume from an interrupted state.

The execution is synchronous but internally uses async operations, making it suitable for use in non-async contexts while still benefiting from async capabilities for I/O operations.

Parameters:

Name Type Description Default
input_data dict[str, Any]

Input dictionary for graph execution. For new executions, should contain 'messages' key with list of initial messages. For resumed executions, can contain additional data to merge.

required
config dict[str, Any] | None

Optional configuration dictionary containing execution settings:

- user_id: Identifier for the user/session
- thread_id: Unique identifier for this execution thread
- run_id: Unique identifier for this specific run
- recursion_limit: Maximum steps before stopping (default: 25)

None
response_granularity ResponseGranularity

Level of detail in the response:

- LOW: Returns only messages (default)
- PARTIAL: Returns context, summary, and messages
- FULL: Returns complete state and messages

LOW

Returns:

Type Description
dict[str, Any]

Dictionary containing execution results formatted according to the

dict[str, Any]

specified granularity level. Always includes execution messages

dict[str, Any]

and may include additional state information.

Raises:

Type Description
ValueError

If input_data is invalid for new execution.

GraphRecursionError

If execution exceeds recursion limit.

Various exceptions

Depending on node execution failures.

Example
# Basic execution
result = compiled.invoke({"messages": [Message.text_message("Process this data")]})
print(result["messages"])  # Final execution messages

# With configuration and full details
result = compiled.invoke(
    input_data={"messages": [message]},
    config={"user_id": "user123", "thread_id": "session456", "recursion_limit": 50},
    response_granularity=ResponseGranularity.FULL,
)
print(result["state"])  # Complete final state
Note

This method uses asyncio.run() internally, so it should not be called from within an async context. Use ainvoke() instead for async execution.

Source code in pyagenity/graph/compiled_graph.py
def invoke(
    self,
    input_data: dict[str, Any],
    config: dict[str, Any] | None = None,
    response_granularity: ResponseGranularity = ResponseGranularity.LOW,
) -> dict[str, Any]:
    """Execute the graph synchronously and return the final results.

    Runs the complete graph workflow from start to finish, handling state
    management, node execution, and result formatting. This method automatically
    detects whether to start a fresh execution or resume from an interrupted state.

    The execution is synchronous but internally uses async operations, making it
    suitable for use in non-async contexts while still benefiting from async
    capabilities for I/O operations.

    Args:
        input_data: Input dictionary for graph execution. For new executions,
            should contain 'messages' key with list of initial messages.
            For resumed executions, can contain additional data to merge.
        config: Optional configuration dictionary containing execution settings:
            - user_id: Identifier for the user/session
            - thread_id: Unique identifier for this execution thread
            - run_id: Unique identifier for this specific run
            - recursion_limit: Maximum steps before stopping (default: 25)
        response_granularity: Level of detail in the response:
            - LOW: Returns only messages (default)
            - PARTIAL: Returns context, summary, and messages
            - FULL: Returns complete state and messages

    Returns:
        Dictionary containing execution results formatted according to the
        specified granularity level. Always includes execution messages
        and may include additional state information.

    Raises:
        ValueError: If input_data is invalid for new execution.
        GraphRecursionError: If execution exceeds recursion limit.
        Various exceptions: Depending on node execution failures.

    Example:
        ```python
        # Basic execution
        result = compiled.invoke({"messages": [Message.text_message("Process this data")]})
        print(result["messages"])  # Final execution messages

        # With configuration and full details
        result = compiled.invoke(
            input_data={"messages": [message]},
            config={"user_id": "user123", "thread_id": "session456", "recursion_limit": 50},
            response_granularity=ResponseGranularity.FULL,
        )
        print(result["state"])  # Complete final state
        ```

    Note:
        This method uses asyncio.run() internally, so it should not be called
        from within an async context. Use ainvoke() instead for async execution.
    """
    logger.info(
        "Starting synchronous graph execution with %d input keys, granularity=%s",
        len(input_data) if input_data else 0,
        response_granularity,
    )
    logger.debug("Input data keys: %s", list(input_data.keys()) if input_data else [])
    # Async Will Handle Event Publish

    try:
        result = asyncio.run(self.ainvoke(input_data, config, response_granularity))
        logger.info("Synchronous graph execution completed successfully")
        return result
    except Exception as e:
        logger.exception("Synchronous graph execution failed: %s", e)
        raise
stop
stop(config)

Request the current graph execution to stop (sync helper).

This sets a stop flag in the checkpointer's thread store keyed by thread_id. Handlers periodically check this flag and interrupt execution. Returns a small status dict.

Source code in pyagenity/graph/compiled_graph.py
def stop(self, config: dict[str, Any]) -> dict[str, Any]:
    """Request the current graph execution to stop (sync helper).

    This sets a stop flag in the checkpointer's thread store keyed by thread_id.
    Handlers periodically check this flag and interrupt execution.
    Returns a small status dict.
    """
    return asyncio.run(self.astop(config))
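
From synchronous code the same request is a one-liner; avoid calling it from inside a running event loop, since it uses asyncio.run() internally.

```python
# `compiled` is a CompiledGraph; returns the same status dict as astop().
status = compiled.stop({"thread_id": "session-1"})
print(status)
```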
stream
stream(input_data, config=None, response_granularity=ResponseGranularity.LOW)

Execute the graph synchronously with streaming support.

Yields Message objects containing incremental responses. If nodes return streaming responses, yields them directly. If nodes return complete responses, simulates streaming by chunking.

Parameters:

Name Type Description Default
input_data dict[str, Any]

Input dict

required
config dict[str, Any] | None

Configuration dictionary

None
response_granularity ResponseGranularity

Response parsing granularity

LOW

Yields:

Type Description
Generator[Message]

Message objects with incremental content

Source code in pyagenity/graph/compiled_graph.py
def stream(
    self,
    input_data: dict[str, Any],
    config: dict[str, Any] | None = None,
    response_granularity: ResponseGranularity = ResponseGranularity.LOW,
) -> Generator[Message]:
    """Execute the graph synchronously with streaming support.

    Yields Message objects containing incremental responses.
    If nodes return streaming responses, yields them directly.
    If nodes return complete responses, simulates streaming by chunking.

    Args:
        input_data: Input dict
        config: Configuration dictionary
        response_granularity: Response parsing granularity

    Yields:
        Message objects with incremental content
    """

    # For sync streaming, we'll use asyncio.run to handle the async implementation
    async def _async_stream():
        async for chunk in self.astream(input_data, config, response_granularity):
            yield chunk

    # Convert async generator to sync iteration with a dedicated event loop
    gen = _async_stream()
    loop = asyncio.new_event_loop()
    policy = asyncio.get_event_loop_policy()
    try:
        previous_loop = policy.get_event_loop()
    except Exception:
        previous_loop = None
    asyncio.set_event_loop(loop)
    logger.info("Synchronous streaming started")

    try:
        while True:
            try:
                chunk = loop.run_until_complete(gen.__anext__())
                yield chunk
            except StopAsyncIteration:
                break
    finally:
        # Attempt to close the async generator cleanly
        with contextlib.suppress(Exception):
            loop.run_until_complete(gen.aclose())  # type: ignore[attr-defined]
        # Restore previous loop if any, then close created loop
        try:
            if previous_loop is not None:
                asyncio.set_event_loop(previous_loop)
        finally:
            loop.close()
    logger.info("Synchronous streaming completed")

edge

Graph edge representation and routing logic for PyAgenity workflows.

This module defines the Edge class, which represents connections between nodes in a PyAgenity graph workflow. Edges can be either static (always followed) or conditional (followed only when certain conditions are met), enabling complex routing logic and decision-making within graph execution.

Edges are fundamental building blocks that define the flow of execution through a graph, determining which node should execute next based on the current state and any conditional logic.

Classes:

Name Description
Edge

Represents a connection between two nodes in a graph workflow.

Attributes:

Name Type Description
logger

Attributes

logger module-attribute
logger = getLogger(__name__)

Classes

Edge

Represents a connection between two nodes in a graph workflow.

An Edge defines the relationship and routing logic between nodes, specifying how execution should flow from one node to another. Edges can be either static (unconditional) or conditional based on runtime state evaluation.

Edges support complex routing scenarios including:

- Simple static connections between nodes
- Conditional routing based on state evaluation
- Dynamic routing with multiple possible destinations
- Decision trees and branching logic

Attributes:

Name Type Description
from_node

Name of the source node where execution originates.

to_node

Name of the destination node where execution continues.

condition

Optional callable that determines if this edge should be followed. If None, the edge is always followed (static edge).

condition_result str | None

Optional value to match against condition result for mapped conditional edges.

Example
# Static edge - always followed
static_edge = Edge("start", "process")


# Conditional edge - followed only if condition returns True
def needs_approval(state):
    return state.data.get("requires_approval", False)


conditional_edge = Edge("process", "approval", condition=needs_approval)


# Mapped conditional edge - follows based on specific condition result
def get_priority(state):
    return state.data.get("priority", "normal")


high_priority_edge = Edge("triage", "urgent", condition=get_priority)
high_priority_edge.condition_result = "high"

Methods:

Name Description
__init__

Initialize a new Edge with source, destination, and optional condition.

Source code in pyagenity/graph/edge.py
class Edge:
    """Represents a connection between two nodes in a graph workflow.

    An Edge defines the relationship and routing logic between nodes, specifying
    how execution should flow from one node to another. Edges can be either
    static (unconditional) or conditional based on runtime state evaluation.

    Edges support complex routing scenarios including:
    - Simple static connections between nodes
    - Conditional routing based on state evaluation
    - Dynamic routing with multiple possible destinations
    - Decision trees and branching logic

    Attributes:
        from_node: Name of the source node where execution originates.
        to_node: Name of the destination node where execution continues.
        condition: Optional callable that determines if this edge should be
            followed. If None, the edge is always followed (static edge).
        condition_result: Optional value to match against condition result
            for mapped conditional edges.

    Example:
        ```python
        # Static edge - always followed
        static_edge = Edge("start", "process")


        # Conditional edge - followed only if condition returns True
        def needs_approval(state):
            return state.data.get("requires_approval", False)


        conditional_edge = Edge("process", "approval", condition=needs_approval)


        # Mapped conditional edge - follows based on specific condition result
        def get_priority(state):
            return state.data.get("priority", "normal")


        high_priority_edge = Edge("triage", "urgent", condition=get_priority)
        high_priority_edge.condition_result = "high"
        ```
    """

    def __init__(
        self,
        from_node: str,
        to_node: str,
        condition: Callable | None = None,
    ):
        """Initialize a new Edge with source, destination, and optional condition.

        Args:
            from_node: Name of the source node. Must match a node name in the graph.
            to_node: Name of the destination node. Must match a node name in the graph
                or be a special constant like END.
            condition: Optional callable that takes an AgentState as argument and
                returns a value to determine if this edge should be followed.
                If None, this is a static edge that's always followed.

        Note:
            The condition function should be deterministic and side-effect free
            for predictable execution behavior. It receives the current AgentState
            and should return a boolean (for simple conditions) or a string/value
            (for mapped conditional routing).
        """
        logger.debug(
            "Creating edge from '%s' to '%s' with condition=%s",
            from_node,
            to_node,
            "yes" if condition else "no",
        )
        self.from_node = from_node
        self.to_node = to_node
        self.condition = condition
        self.condition_result: str | None = None
Attributes
condition instance-attribute
condition = condition
condition_result instance-attribute
condition_result = None
from_node instance-attribute
from_node = from_node
to_node instance-attribute
to_node = to_node
Functions
__init__
__init__(from_node, to_node, condition=None)

Initialize a new Edge with source, destination, and optional condition.

Parameters:

Name Type Description Default
from_node str

Name of the source node. Must match a node name in the graph.

required
to_node str

Name of the destination node. Must match a node name in the graph or be a special constant like END.

required
condition Callable | None

Optional callable that takes an AgentState as argument and returns a value to determine if this edge should be followed. If None, this is a static edge that's always followed.

None
Note

The condition function should be deterministic and side-effect free for predictable execution behavior. It receives the current AgentState and should return a boolean (for simple conditions) or a string/value (for mapped conditional routing).

Source code in pyagenity/graph/edge.py
def __init__(
    self,
    from_node: str,
    to_node: str,
    condition: Callable | None = None,
):
    """Initialize a new Edge with source, destination, and optional condition.

    Args:
        from_node: Name of the source node. Must match a node name in the graph.
        to_node: Name of the destination node. Must match a node name in the graph
            or be a special constant like END.
        condition: Optional callable that takes an AgentState as argument and
            returns a value to determine if this edge should be followed.
            If None, this is a static edge that's always followed.

    Note:
        The condition function should be deterministic and side-effect free
        for predictable execution behavior. It receives the current AgentState
        and should return a boolean (for simple conditions) or a string/value
        (for mapped conditional routing).
    """
    logger.debug(
        "Creating edge from '%s' to '%s' with condition=%s",
        from_node,
        to_node,
        "yes" if condition else "no",
    )
    self.from_node = from_node
    self.to_node = to_node
    self.condition = condition
    self.condition_result: str | None = None

node

Node execution and management for PyAgenity graph workflows.

This module defines the Node class, which represents executable units within a PyAgenity graph workflow. Nodes encapsulate functions or ToolNode instances that perform specific tasks, handle dependency injection, manage execution context, and support both synchronous and streaming execution modes.

Nodes are the fundamental building blocks of graph workflows, responsible for processing state, executing business logic, and producing outputs that drive the workflow forward. They integrate seamlessly with PyAgenity's dependency injection system and callback management framework.

Classes:

Name Description
Node

Represents a node in the graph workflow.

Attributes:

Name Type Description
logger

Attributes

logger module-attribute
logger = getLogger(__name__)

Classes

Node

Represents a node in the graph workflow.

A Node encapsulates a function or ToolNode that can be executed as part of a graph workflow. It handles dependency injection, parameter mapping, and execution context management.

The Node class supports both regular callable functions and ToolNode instances for handling tool-based operations. It automatically injects dependencies based on function signatures and provides legacy parameter support.

Attributes:

Name Type Description
name str

Unique identifier for the node within the graph.

func Union[Callable, ToolNode]

The function or ToolNode to execute.

Example

def my_function(state, config):
    return {"result": "processed"}


node = Node("processor", my_function)
result = await node.execute(config, state)

Methods:

Name Description
__init__

Initialize a new Node instance with function and dependencies.

execute

Execute the node function with comprehensive context and callback support.

stream

Execute the node function with streaming output support.

Source code in pyagenity/graph/node.py
class Node:
    """Represents a node in the graph workflow.

    A Node encapsulates a function or ToolNode that can be executed as part of
    a graph workflow. It handles dependency injection, parameter mapping, and
    execution context management.

    The Node class supports both regular callable functions and ToolNode instances
    for handling tool-based operations. It automatically injects dependencies
    based on function signatures and provides legacy parameter support.

    Attributes:
        name (str): Unique identifier for the node within the graph.
        func (Union[Callable, ToolNode]): The function or ToolNode to execute.

    Example:
        >>> def my_function(state, config):
        ...     return {"result": "processed"}
        >>> node = Node("processor", my_function)
        >>> result = await node.execute(config, state)
    """

    def __init__(
        self,
        name: str,
        func: Union[Callable, "ToolNode"],
        publisher: BasePublisher | None = Inject[BasePublisher],
    ):
        """Initialize a new Node instance with function and dependencies.

        Args:
            name: Unique identifier for the node within the graph. This name
                is used for routing, logging, and referencing the node in
                graph configuration.
            func: The function or ToolNode to execute when this node is called.
                Functions should accept at least 'state' and 'config' parameters.
                ToolNode instances handle tool-based operations and provide
                their own execution logic.
            publisher: Optional event publisher for execution monitoring.
                Injected via dependency injection if not explicitly provided.
                Used for publishing node execution events and status updates.

        Note:
            The function signature is automatically analyzed to determine
            required parameters and dependency injection points. Parameters
            matching injectable service names will be automatically provided
            by the framework during execution.
        """
        logger.debug(
            "Initializing node '%s' with func=%s",
            name,
            getattr(func, "__name__", type(func).__name__),
        )
        self.name = name
        self.func = func
        self.publisher = publisher
        self.invoke_handler = InvokeNodeHandler(
            name,
            func,
        )

        self.stream_handler = StreamNodeHandler(
            name,
            func,
        )

    async def execute(
        self,
        config: dict[str, Any],
        state: AgentState,
        callback_mgr: CallbackManager = Inject[CallbackManager],
    ) -> dict[str, Any] | list[Message]:
        """Execute the node function with comprehensive context and callback support.

        Executes the node's function or ToolNode with full dependency injection,
        callback hook execution, and error handling. This method provides the
        complete execution environment including state access, configuration,
        and injected services.

        Args:
            config: Configuration dictionary containing execution context,
                user settings, thread identification, and runtime parameters.
            state: Current AgentState providing workflow context, message history,
                and shared state information accessible to the node function.
            callback_mgr: Callback manager for executing pre/post execution hooks.
                Injected via dependency injection if not explicitly provided.

        Returns:
            Either a dictionary containing updated state and execution results,
            or a list of Message objects representing the node's output.
            The return type depends on the node function's implementation.

        Raises:
            Various exceptions depending on node function behavior. All exceptions
            are handled by the callback manager's error handling hooks before
            being propagated.

        Example:
            ```python
            # Node function that returns messages
            def process_data(state, config):
                result = process(state.data)
                return [Message.text_message(f"Processed: {result}")]


            node = Node("processor", process_data)
            messages = await node.execute(config, state)
            ```

        Note:
            The node function receives dependency-injected parameters based on
            its signature. Common injectable parameters include 'state', 'config',
            'context_manager', 'publisher', and other framework services.
        """
        return await self.invoke_handler.invoke(
            config,
            state,
            callback_mgr,
        )

    async def stream(
        self,
        config: dict[str, Any],
        state: AgentState,
        callback_mgr: CallbackManager = Inject[CallbackManager],
    ) -> AsyncIterable[dict[str, Any] | Message]:
        """Execute the node function with streaming output support.

        Similar to execute() but designed for streaming scenarios where the node
        function can produce incremental results. This method provides an async
        iterator interface over the node's outputs, allowing for real-time
        processing and response streaming.

        Args:
            config: Configuration dictionary with execution context and settings.
            state: Current AgentState providing workflow context and shared state.
            callback_mgr: Callback manager for pre/post execution hook handling.

        Yields:
            Dictionary objects or Message instances representing incremental
            outputs from the node function. The exact type and frequency of
            yields depends on the node function's streaming implementation.

        Example:
            ```python
            async def streaming_processor(state, config):
                for item in large_dataset:
                    result = process_item(item)
                    yield Message.text_message(f"Processed item: {result}")


            node = Node("stream_processor", streaming_processor)
            async for output in node.stream(config, state):
                print(f"Streamed: {output.content}")
            ```

        Note:
            Not all node functions support streaming. For non-streaming functions,
            this method will yield a single result equivalent to calling execute().
            The streaming capability is determined by the node function's implementation.
        """
        result = self.stream_handler.stream(
            config,
            state,
            callback_mgr,
        )

        async for item in result:
            yield item
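To make the dependency-injection note above concrete, here is a minimal sketch of a node function that requests an injected service purely by parameter name. The parameter name `publisher` and the returned dictionary are illustrative assumptions; the exact set of injectable names is determined by the services registered with the framework at execution time.

```python
# Minimal sketch: a node function that opts into dependency injection.
# 'state' and 'config' are always provided; 'publisher' is filled in by the
# framework only if a matching injectable service is registered (see the
# Note in __init__ below), otherwise it stays None.
def audited_step(state, config, publisher=None):
    # The publisher usage is an assumption; the real BasePublisher API may differ.
    return {"status": "done"}


node = Node("audited_step", audited_step)
# result = await node.execute(config, state)  # call from an async context
```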
Attributes
func instance-attribute
func = func
invoke_handler instance-attribute
invoke_handler = InvokeNodeHandler(name, func)
name instance-attribute
name = name
publisher instance-attribute
publisher = publisher
stream_handler instance-attribute
stream_handler = StreamNodeHandler(name, func)
Functions
__init__
__init__(name, func, publisher=Inject[BasePublisher])

Initialize a new Node instance with function and dependencies.

Parameters:

Name Type Description Default
name str

Unique identifier for the node within the graph. This name is used for routing, logging, and referencing the node in graph configuration.

required
func Union[Callable, ToolNode]

The function or ToolNode to execute when this node is called. Functions should accept at least 'state' and 'config' parameters. ToolNode instances handle tool-based operations and provide their own execution logic.

required
publisher BasePublisher | None

Optional event publisher for execution monitoring. Injected via dependency injection if not explicitly provided. Used for publishing node execution events and status updates.

Inject[BasePublisher]
Note

The function signature is automatically analyzed to determine required parameters and dependency injection points. Parameters matching injectable service names will be automatically provided by the framework during execution.

Source code in pyagenity/graph/node.py
def __init__(
    self,
    name: str,
    func: Union[Callable, "ToolNode"],
    publisher: BasePublisher | None = Inject[BasePublisher],
):
    """Initialize a new Node instance with function and dependencies.

    Args:
        name: Unique identifier for the node within the graph. This name
            is used for routing, logging, and referencing the node in
            graph configuration.
        func: The function or ToolNode to execute when this node is called.
            Functions should accept at least 'state' and 'config' parameters.
            ToolNode instances handle tool-based operations and provide
            their own execution logic.
        publisher: Optional event publisher for execution monitoring.
            Injected via dependency injection if not explicitly provided.
            Used for publishing node execution events and status updates.

    Note:
        The function signature is automatically analyzed to determine
        required parameters and dependency injection points. Parameters
        matching injectable service names will be automatically provided
        by the framework during execution.
    """
    logger.debug(
        "Initializing node '%s' with func=%s",
        name,
        getattr(func, "__name__", type(func).__name__),
    )
    self.name = name
    self.func = func
    self.publisher = publisher
    self.invoke_handler = InvokeNodeHandler(
        name,
        func,
    )

    self.stream_handler = StreamNodeHandler(
        name,
        func,
    )
execute async
execute(config, state, callback_mgr=Inject[CallbackManager])

Execute the node function with comprehensive context and callback support.

Executes the node's function or ToolNode with full dependency injection, callback hook execution, and error handling. This method provides the complete execution environment including state access, configuration, and injected services.

Parameters:

Name Type Description Default
config dict[str, Any]

Configuration dictionary containing execution context, user settings, thread identification, and runtime parameters.

required
state AgentState

Current AgentState providing workflow context, message history, and shared state information accessible to the node function.

required
callback_mgr CallbackManager

Callback manager for executing pre/post execution hooks. Injected via dependency injection if not explicitly provided.

Inject[CallbackManager]

Returns:

Type Description
dict[str, Any] | list[Message]

Either a dictionary containing updated state and execution results,

dict[str, Any] | list[Message]

or a list of Message objects representing the node's output.

dict[str, Any] | list[Message]

The return type depends on the node function's implementation.

Example
# Node function that returns messages
def process_data(state, config):
    result = process(state.data)
    return [Message.text_message(f"Processed: {result}")]


node = Node("processor", process_data)
messages = await node.execute(config, state)
Note

The node function receives dependency-injected parameters based on its signature. Common injectable parameters include 'state', 'config', 'context_manager', 'publisher', and other framework services.

Source code in pyagenity/graph/node.py
async def execute(
    self,
    config: dict[str, Any],
    state: AgentState,
    callback_mgr: CallbackManager = Inject[CallbackManager],
) -> dict[str, Any] | list[Message]:
    """Execute the node function with comprehensive context and callback support.

    Executes the node's function or ToolNode with full dependency injection,
    callback hook execution, and error handling. This method provides the
    complete execution environment including state access, configuration,
    and injected services.

    Args:
        config: Configuration dictionary containing execution context,
            user settings, thread identification, and runtime parameters.
        state: Current AgentState providing workflow context, message history,
            and shared state information accessible to the node function.
        callback_mgr: Callback manager for executing pre/post execution hooks.
            Injected via dependency injection if not explicitly provided.

    Returns:
        Either a dictionary containing updated state and execution results,
        or a list of Message objects representing the node's output.
        The return type depends on the node function's implementation.

    Raises:
        Various exceptions depending on node function behavior. All exceptions
        are handled by the callback manager's error handling hooks before
        being propagated.

    Example:
        ```python
        # Node function that returns messages
        def process_data(state, config):
            result = process(state.data)
            return [Message.text_message(f"Processed: {result}")]


        node = Node("processor", process_data)
        messages = await node.execute(config, state)
        ```

    Note:
        The node function receives dependency-injected parameters based on
        its signature. Common injectable parameters include 'state', 'config',
        'context_manager', 'publisher', and other framework services.
    """
    return await self.invoke_handler.invoke(
        config,
        state,
        callback_mgr,
    )
stream async
stream(config, state, callback_mgr=Inject[CallbackManager])

Execute the node function with streaming output support.

Similar to execute() but designed for streaming scenarios where the node function can produce incremental results. This method provides an async iterator interface over the node's outputs, allowing for real-time processing and response streaming.

Parameters:

Name Type Description Default
config dict[str, Any]

Configuration dictionary with execution context and settings.

required
state AgentState

Current AgentState providing workflow context and shared state.

required
callback_mgr CallbackManager

Callback manager for pre/post execution hook handling.

Inject[CallbackManager]

Yields:

Type Description
AsyncIterable[dict[str, Any] | Message]

Dictionary objects or Message instances representing incremental

AsyncIterable[dict[str, Any] | Message]

outputs from the node function. The exact type and frequency of

AsyncIterable[dict[str, Any] | Message]

yields depends on the node function's streaming implementation.

Example
async def streaming_processor(state, config):
    for item in large_dataset:
        result = process_item(item)
        yield Message.text_message(f"Processed item: {result}")


node = Node("stream_processor", streaming_processor)
async for output in node.stream(config, state):
    print(f"Streamed: {output.content}")
Note

Not all node functions support streaming. For non-streaming functions, this method will yield a single result equivalent to calling execute(). The streaming capability is determined by the node function's implementation.

Source code in pyagenity/graph/node.py
async def stream(
    self,
    config: dict[str, Any],
    state: AgentState,
    callback_mgr: CallbackManager = Inject[CallbackManager],
) -> AsyncIterable[dict[str, Any] | Message]:
    """Execute the node function with streaming output support.

    Similar to execute() but designed for streaming scenarios where the node
    function can produce incremental results. This method provides an async
    iterator interface over the node's outputs, allowing for real-time
    processing and response streaming.

    Args:
        config: Configuration dictionary with execution context and settings.
        state: Current AgentState providing workflow context and shared state.
        callback_mgr: Callback manager for pre/post execution hook handling.

    Yields:
        Dictionary objects or Message instances representing incremental
        outputs from the node function. The exact type and frequency of
        yields depends on the node function's streaming implementation.

    Example:
        ```python
        async def streaming_processor(state, config):
            for item in large_dataset:
                result = process_item(item)
                yield Message.text_message(f"Processed item: {result}")


        node = Node("stream_processor", streaming_processor)
        async for output in node.stream(config, state):
            print(f"Streamed: {output.content}")
        ```

    Note:
        Not all node functions support streaming. For non-streaming functions,
        this method will yield a single result equivalent to calling execute().
        The streaming capability is determined by the node function's implementation.
    """
    result = self.stream_handler.stream(
        config,
        state,
        callback_mgr,
    )

    async for item in result:
        yield item
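
As a small illustration of the non-streaming fallback described in the Note above, a regular function wrapped in a Node can still be consumed through stream(): it yields a single result equivalent to calling execute(). The names below are illustrative only.

```python
# Sketch: consuming stream() over a non-streaming node function.
def summarize(state, config):
    return {"summary": "done"}


node = Node("summarize", summarize)


async def run(config, state):
    async for output in node.stream(config, state):
        # For a non-streaming function this loop body runs once, with a
        # result equivalent to calling execute().
        print(output)
```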

state_graph

Classes:

Name Description
StateGraph

Main graph class for orchestrating multi-agent workflows.

Attributes:

Name Type Description
StateT
logger

Attributes

StateT module-attribute
StateT = TypeVar('StateT', bound=AgentState)
logger module-attribute
logger = getLogger(__name__)

Classes

StateGraph

Main graph class for orchestrating multi-agent workflows.

This class provides the core functionality for building and managing stateful agent workflows. It is similar to LangGraph's StateGraph, with added support for dependency injection.

The graph is generic over state types to support custom AgentState subclasses, allowing for type-safe state management throughout the workflow execution.

Attributes:

Name Type Description
state StateT

The current state of the graph workflow.

nodes dict[str, Node]

Collection of nodes in the graph.

edges list[Edge]

Collection of edges connecting nodes.

entry_point str | None

Name of the starting node for execution.

context_manager BaseContextManager[StateT] | None

Optional context manager for handling cross-node state operations.

dependency_container DependencyContainer

Container for managing dependencies that can be injected into node functions.

compiled bool

Whether the graph has been compiled for execution.

Example

graph = StateGraph()
graph.add_node("process", process_function)
graph.add_edge(START, "process")
graph.add_edge("process", END)
compiled = graph.compile()
result = compiled.invoke({"input": "data"})

Methods:

Name Description
__init__

Initialize a new StateGraph instance.

add_conditional_edges

Add conditional routing between nodes based on runtime evaluation.

add_edge

Add a static edge between two nodes.

add_node

Add a node to the graph.

compile

Compile the graph for execution.

set_entry_point

Set the entry point for the graph.

Source code in pyagenity/graph/state_graph.py
class StateGraph[StateT: AgentState]:
    """Main graph class for orchestrating multi-agent workflows.

    This class provides the core functionality for building and managing stateful
    agent workflows. It is similar to LangGraph's StateGraph,
    with added support for dependency injection.

    The graph is generic over state types to support custom AgentState subclasses,
    allowing for type-safe state management throughout the workflow execution.

    Attributes:
        state (StateT): The current state of the graph workflow.
        nodes (dict[str, Node]): Collection of nodes in the graph.
        edges (list[Edge]): Collection of edges connecting nodes.
        entry_point (str | None): Name of the starting node for execution.
        context_manager (BaseContextManager[StateT] | None): Optional context manager
            for handling cross-node state operations.
        dependency_container (DependencyContainer): Container for managing
            dependencies that can be injected into node functions.
        compiled (bool): Whether the graph has been compiled for execution.

    Example:
        >>> graph = StateGraph()
        >>> graph.add_node("process", process_function)
        >>> graph.add_edge(START, "process")
        >>> graph.add_edge("process", END)
        >>> compiled = graph.compile()
        >>> result = compiled.invoke({"input": "data"})
    """

    def __init__(
        self,
        state: StateT | None = None,
        context_manager: BaseContextManager[StateT] | None = None,
        publisher: BasePublisher | None = None,
        id_generator: BaseIDGenerator = DefaultIDGenerator(),
        container: InjectQ | None = None,
        thread_name_generator: Callable[[], str] | None = None,
    ):
        """Initialize a new StateGraph instance.

        Args:
            state: Initial state for the graph. If None, a default AgentState
                will be created.
            context_manager: Optional context manager for handling cross-node
                state operations and advanced state management patterns.
            publisher: Publisher for emitting events during execution.
            id_generator: Generator used to produce framework-generated IDs.
                Defaults to DefaultIDGenerator.
            container: Dependency injection container for services injected into
                node functions. If None, the global InjectQ singleton is used.
            thread_name_generator: Optional callable that generates thread names.
                If None, a default generator is used.

        Note:
            START and END nodes are automatically added to the graph upon
            initialization and accept the full node signature including
            dependencies.

        Example:
            # Basic usage with default AgentState
            >>> graph = StateGraph()

            # With custom state
            >>> custom_state = MyCustomState()
            >>> graph = StateGraph(custom_state)

            # Or using type hints for clarity
            >>> graph = StateGraph[MyCustomState](MyCustomState())
        """
        logger.info("Initializing StateGraph")
        logger.debug(
            "StateGraph init with state=%s, context_manager=%s",
            type(state).__name__ if state else "default AgentState",
            type(context_manager).__name__ if context_manager else None,
        )

        # State handling
        self._state: StateT = state if state else AgentState()  # type: ignore[assignment]

        # Graph structure
        self.nodes: dict[str, Node] = {}
        self.edges: list[Edge] = []
        self.entry_point: str | None = None

        # Services
        self._publisher: BasePublisher | None = publisher
        self._id_generator: BaseIDGenerator = id_generator
        self._context_manager: BaseContextManager[StateT] | None = context_manager
        self.thread_name_generator = thread_name_generator
        # save container for dependency injection
        # if any container is passed then we will activate that
        # otherwise we can skip it and use the default one
        if container is None:
            self._container = InjectQ.get_instance()
            logger.debug("No container provided, using global singleton instance")
        else:
            logger.debug("Using provided dependency container instance")
            self._container = container
            self._container.activate()

        # Register task_manager, for async tasks
        # This will be used to run background tasks
        self._task_manager = BackgroundTaskManager()

        # now setup the graph
        self._setup()

        # Add START and END nodes (accept full node signature including dependencies)
        logger.debug("Adding default START and END nodes")
        self.nodes[START] = Node(START, lambda state, config, **deps: state, self._publisher)  # type: ignore
        self.nodes[END] = Node(END, lambda state, config, **deps: state, self._publisher)
        logger.debug("StateGraph initialized with %d nodes", len(self.nodes))

    def _setup(self):
        """Setup the graph before compilation.

        This method can be used to perform any necessary setup or validation
        before compiling the graph for execution.
        """
        logger.info("Setting up StateGraph before compilation")
        # Placeholder for any setup logic needed before compilation
        # register dependencies

        # register state and context manager as singletons (these are nullable)
        self._container.bind_instance(
            BaseContextManager,
            self._context_manager,
            allow_none=True,
            allow_concrete=True,
        )
        self._container.bind_instance(
            BasePublisher,
            self._publisher,
            allow_none=True,
            allow_concrete=True,
        )

        # register id generator as factory
        self._container.bind_instance(
            BaseIDGenerator,
            self._id_generator,
            allow_concrete=True,
        )
        self._container.bind("generated_id_type", self._id_generator.id_type)
        # Allow async method also
        self._container.bind_factory(
            "generated_id",
            lambda: self._id_generator.generate(),
        )

        # Attach Thread name generator if provided
        if self.thread_name_generator is None:
            self.thread_name_generator = generate_dummy_thread_name

        generator = self.thread_name_generator or generate_dummy_thread_name

        self._container.bind_factory(
            "generated_thread_name",
            lambda: generator(),
        )

        # Save BackgroundTaskManager
        self._container.bind_instance(
            BackgroundTaskManager,
            self._task_manager,
            allow_concrete=False,
        )

    def add_node(
        self,
        name_or_func: str | Callable,
        func: Union[Callable, "ToolNode", None] = None,
    ) -> "StateGraph":
        """Add a node to the graph.

        This method supports two calling patterns:
        1. Pass a callable as the first argument (name inferred from function name)
        2. Pass a name string and callable/ToolNode as separate arguments

        Args:
            name_or_func: Either the node name (str) or a callable function.
                If callable, the function name will be used as the node name.
            func: The function or ToolNode to execute. Required if name_or_func
                is a string, ignored if name_or_func is callable.

        Returns:
            StateGraph: The graph instance for method chaining.

        Raises:
            ValueError: If invalid arguments are provided.

        Example:
            >>> # Method 1: Function name inferred
            >>> graph.add_node(my_function)
            >>> # Method 2: Explicit name and function
            >>> graph.add_node("process", my_function)
        """
        if callable(name_or_func) and func is None:
            # Function passed as first argument
            name = name_or_func.__name__
            func = name_or_func
            logger.debug("Adding node '%s' with inferred name from function", name)
        elif isinstance(name_or_func, str) and (callable(func) or isinstance(func, ToolNode)):
            # Name and function passed separately
            name = name_or_func
            logger.debug(
                "Adding node '%s' with explicit name and %s",
                name,
                "ToolNode" if isinstance(func, ToolNode) else "callable",
            )
        else:
            error_msg = "Invalid arguments for add_node"
            logger.error(error_msg)
            raise ValueError(error_msg)

        self.nodes[name] = Node(name, func)
        logger.info("Added node '%s' to graph (total nodes: %d)", name, len(self.nodes))
        return self

    def add_edge(
        self,
        from_node: str,
        to_node: str,
    ) -> "StateGraph":
        """Add a static edge between two nodes.

        Creates a direct connection from one node to another. If the source
        node is START, the target node becomes the entry point for the graph.

        Args:
            from_node: Name of the source node.
            to_node: Name of the target node.

        Returns:
            StateGraph: The graph instance for method chaining.

        Example:
            >>> graph.add_edge("node1", "node2")
            >>> graph.add_edge(START, "entry_node")  # Sets entry point
        """
        logger.debug("Adding edge from '%s' to '%s'", from_node, to_node)
        # Set entry point if edge is from START
        if from_node == START:
            self.entry_point = to_node
            logger.info("Set entry point to '%s'", to_node)
        self.edges.append(Edge(from_node, to_node))
        logger.debug("Added edge (total edges: %d)", len(self.edges))
        return self

    def add_conditional_edges(
        self,
        from_node: str,
        condition: Callable,
        path_map: dict[str, str] | None = None,
    ) -> "StateGraph":
        """Add conditional routing between nodes based on runtime evaluation.

        Creates dynamic routing logic where the next node is determined by evaluating
        a condition function against the current state. This enables complex branching
        logic, decision trees, and adaptive workflow routing.

        Args:
            from_node: Name of the source node where the condition is evaluated.
            condition: Callable function that takes the current AgentState and returns
                a value used for routing decisions. Should be deterministic and
                side-effect free.
            path_map: Optional dictionary mapping condition results to destination nodes.
                If provided, the condition's return value is looked up in this mapping.
                If None, the condition should return the destination node name directly.

        Returns:
            StateGraph: The graph instance for method chaining.

        Raises:
            ValueError: If the condition function or path_map configuration is invalid.

        Example:
            ```python
            # Direct routing - condition returns node name
            def route_by_priority(state):
                priority = state.data.get("priority", "normal")
                return "urgent_handler" if priority == "high" else "normal_handler"


            graph.add_conditional_edges("classifier", route_by_priority)


            # Mapped routing - condition result mapped to nodes
            def get_category(state):
                return state.data.get("category", "default")


            category_map = {
                "finance": "finance_processor",
                "legal": "legal_processor",
                "default": "general_processor",
            }
            graph.add_conditional_edges("categorizer", get_category, category_map)
            ```

        Note:
            The condition function receives the current AgentState and should return
            consistent results for the same state. If using path_map, ensure the
            condition's return values match the map keys exactly.
        """
        """Add conditional edges from a node based on a condition function.

        Creates edges that are traversed based on the result of a condition
        function. The condition function receives the current state and should
        return a value that determines which edge to follow.

        Args:
            from_node: Name of the source node.
            condition: Function that evaluates the current state and returns
                a value to determine the next node.
            path_map: Optional mapping from condition results to target nodes.
                If provided, creates multiple conditional edges. If None,
                creates a single conditional edge.

        Returns:
            StateGraph: The graph instance for method chaining.

        Example:
            >>> def route_condition(state):
            ...     return "success" if state.success else "failure"
            >>> graph.add_conditional_edges(
            ...     "processor",
            ...     route_condition,
            ...     {"success": "next_step", "failure": "error_handler"},
            ... )
        """
        # Create edges based on possible returns from condition function
        logger.debug(
            "Node '%s' adding conditional edges with path_map: %s",
            from_node,
            path_map,
        )
        if path_map:
            logger.debug(
                "Node '%s' adding conditional edges with path_map: %s", from_node, path_map
            )
            for condition_result, target_node in path_map.items():
                edge = Edge(from_node, target_node, condition)
                edge.condition_result = condition_result
                self.edges.append(edge)
        else:
            # Single conditional edge
            logger.debug("Node '%s' adding single conditional edge", from_node)
            self.edges.append(Edge(from_node, "", condition))
        return self

    def set_entry_point(self, node_name: str) -> "StateGraph":
        """Set the entry point for the graph."""
        self.entry_point = node_name
        self.add_edge(START, node_name)
        logger.info("Set entry point to '%s'", node_name)
        return self

    def compile(
        self,
        checkpointer: BaseCheckpointer[StateT] | None = None,
        store: BaseStore | None = None,
        interrupt_before: list[str] | None = None,
        interrupt_after: list[str] | None = None,
        callback_manager: CallbackManager = CallbackManager(),
    ) -> "CompiledGraph[StateT]":
        """Compile the graph for execution.

        Args:
            checkpointer: Checkpointer for state persistence
            store: Store for additional data
            interrupt_before: List of node names to interrupt before execution
            interrupt_after: List of node names to interrupt after execution
            callback_manager: Callback manager for executing hooks
        """
        logger.info(
            "Compiling graph with %d nodes, %d edges, entry_point='%s'",
            len(self.nodes),
            len(self.edges),
            self.entry_point,
        )
        logger.debug(
            "Compile options: interrupt_before=%s, interrupt_after=%s",
            interrupt_before,
            interrupt_after,
        )

        if not self.entry_point:
            error_msg = "No entry point set. Use set_entry_point() or add an edge from START."
            logger.error(error_msg)
            raise GraphError(error_msg)

        # Validate graph structure
        logger.debug("Validating graph structure")
        self._validate_graph()
        logger.debug("Graph structure validated successfully")

        # Validate interrupt node names
        interrupt_before = interrupt_before or []
        interrupt_after = interrupt_after or []

        all_interrupt_nodes = set(interrupt_before + interrupt_after)
        invalid_nodes = all_interrupt_nodes - set(self.nodes.keys())
        if invalid_nodes:
            error_msg = f"Invalid interrupt nodes: {invalid_nodes}. Must be existing node names."
            logger.error(error_msg)
            raise GraphError(error_msg)

        self.compiled = True
        logger.info("Graph compilation completed successfully")
        # Import here to avoid circular import at module import time
        # Now update Checkpointer
        if checkpointer is None:
            from pyagenity.checkpointer import InMemoryCheckpointer

            checkpointer = InMemoryCheckpointer[StateT]()
            logger.debug("No checkpointer provided, using InMemoryCheckpointer")

        # Import the CompiledGraph class
        from .compiled_graph import CompiledGraph

        # Setup dependencies
        self._container.bind_instance(
            BaseCheckpointer,
            checkpointer,
            allow_concrete=True,
        )  # not null as we set default
        self._container.bind_instance(
            BaseStore,
            store,
            allow_none=True,
            allow_concrete=True,
        )
        self._container.bind_instance(
            CallbackManager,
            callback_manager,
            allow_concrete=True,
        )  # not null as we set default
        self._container.bind("interrupt_before", interrupt_before)
        self._container.bind("interrupt_after", interrupt_after)
        self._container.bind_instance(StateGraph, self)

        app = CompiledGraph(
            state=self._state,
            interrupt_after=interrupt_after,
            interrupt_before=interrupt_before,
            state_graph=self,
            checkpointer=checkpointer,
            publisher=self._publisher,
            store=store,
            task_manager=self._task_manager,
        )

        self._container.bind(CompiledGraph, app)
        # Compile the Graph, so it will optimize the dependency graph
        self._container.compile()
        return app

    def _validate_graph(self):
        """Validate the graph structure."""
        # Check for orphaned nodes
        connected_nodes = set()
        for edge in self.edges:
            connected_nodes.add(edge.from_node)
            connected_nodes.add(edge.to_node)

        all_nodes = set(self.nodes.keys())
        orphaned = all_nodes - connected_nodes
        if orphaned - {START, END}:  # START and END can be orphaned
            logger.error("Orphaned nodes detected: %s", orphaned - {START, END})
            raise GraphError(f"Orphaned nodes detected: {orphaned - {START, END}}")

        # Check that all edge targets exist
        for edge in self.edges:
            if edge.to_node and edge.to_node not in self.nodes:
                logger.error("Edge '%s' targets non-existent node: %s", edge, edge.to_node)
                raise GraphError(f"Edge targets non-existent node: {edge.to_node}")
Attributes
edges instance-attribute
edges = []
entry_point instance-attribute
entry_point = None
nodes instance-attribute
nodes = {}
thread_name_generator instance-attribute
thread_name_generator = thread_name_generator
Functions
__init__
__init__(state=None, context_manager=None, publisher=None, id_generator=DefaultIDGenerator(), container=None, thread_name_generator=None)

Initialize a new StateGraph instance.

Parameters:

Name Type Description Default
state StateT | None

Initial state for the graph. If None, a default AgentState will be created.

None
context_manager BaseContextManager[StateT] | None

Optional context manager for handling cross-node state operations and advanced state management patterns.

None
publisher BasePublisher | None

Publisher for emitting events during execution.

None
id_generator BaseIDGenerator

Generator used to produce framework-generated IDs. Defaults to DefaultIDGenerator.

DefaultIDGenerator()
container InjectQ | None

Dependency injection container for services injected into node functions. If None, the global InjectQ singleton instance is used.

None
thread_name_generator Callable[[], str] | None

Optional callable that generates thread names. If None, a default generator is used.

None
Note

START and END nodes are automatically added to the graph upon initialization and accept the full node signature including dependencies.

Example
# Basic usage with default AgentState
graph = StateGraph()

# With custom state
custom_state = MyCustomState()
graph = StateGraph(custom_state)

# Or using type hints for clarity
graph = StateGraph[MyCustomState](MyCustomState())

Source code in pyagenity/graph/state_graph.py
def __init__(
    self,
    state: StateT | None = None,
    context_manager: BaseContextManager[StateT] | None = None,
    publisher: BasePublisher | None = None,
    id_generator: BaseIDGenerator = DefaultIDGenerator(),
    container: InjectQ | None = None,
    thread_name_generator: Callable[[], str] | None = None,
):
    """Initialize a new StateGraph instance.

    Args:
        state: Initial state for the graph. If None, a default AgentState
            will be created.
        context_manager: Optional context manager for handling cross-node
            state operations and advanced state management patterns.
        publisher: Publisher for emitting events during execution.
        id_generator: Generator used to produce framework-generated IDs.
            Defaults to DefaultIDGenerator.
        container: Dependency injection container for services injected into
            node functions. If None, the global InjectQ singleton is used.
        thread_name_generator: Optional callable that generates thread names.
            If None, a default generator is used.

    Note:
        START and END nodes are automatically added to the graph upon
        initialization and accept the full node signature including
        dependencies.

    Example:
        # Basic usage with default AgentState
        >>> graph = StateGraph()

        # With custom state
        >>> custom_state = MyCustomState()
        >>> graph = StateGraph(custom_state)

        # Or using type hints for clarity
        >>> graph = StateGraph[MyCustomState](MyCustomState())
    """
    logger.info("Initializing StateGraph")
    logger.debug(
        "StateGraph init with state=%s, context_manager=%s",
        type(state).__name__ if state else "default AgentState",
        type(context_manager).__name__ if context_manager else None,
    )

    # State handling
    self._state: StateT = state if state else AgentState()  # type: ignore[assignment]

    # Graph structure
    self.nodes: dict[str, Node] = {}
    self.edges: list[Edge] = []
    self.entry_point: str | None = None

    # Services
    self._publisher: BasePublisher | None = publisher
    self._id_generator: BaseIDGenerator = id_generator
    self._context_manager: BaseContextManager[StateT] | None = context_manager
    self.thread_name_generator = thread_name_generator
    # save container for dependency injection
    # if any container is passed then we will activate that
    # otherwise we can skip it and use the default one
    if container is None:
        self._container = InjectQ.get_instance()
        logger.debug("No container provided, using global singleton instance")
    else:
        logger.debug("Using provided dependency container instance")
        self._container = container
        self._container.activate()

    # Register task_manager, for async tasks
    # This will be used to run background tasks
    self._task_manager = BackgroundTaskManager()

    # now setup the graph
    self._setup()

    # Add START and END nodes (accept full node signature including dependencies)
    logger.debug("Adding default START and END nodes")
    self.nodes[START] = Node(START, lambda state, config, **deps: state, self._publisher)  # type: ignore
    self.nodes[END] = Node(END, lambda state, config, **deps: state, self._publisher)
    logger.debug("StateGraph initialized with %d nodes", len(self.nodes))
add_conditional_edges
add_conditional_edges(from_node, condition, path_map=None)

Add conditional routing between nodes based on runtime evaluation.

Creates dynamic routing logic where the next node is determined by evaluating a condition function against the current state. This enables complex branching logic, decision trees, and adaptive workflow routing.

Parameters:

Name Type Description Default
from_node str

Name of the source node where the condition is evaluated.

required
condition Callable

Callable function that takes the current AgentState and returns a value used for routing decisions. Should be deterministic and side-effect free.

required
path_map dict[str, str] | None

Optional dictionary mapping condition results to destination nodes. If provided, the condition's return value is looked up in this mapping. If None, the condition should return the destination node name directly.

None

Returns:

Name Type Description
StateGraph StateGraph

The graph instance for method chaining.

Raises:

Type Description
ValueError

If the condition function or path_map configuration is invalid.

Example
# Direct routing - condition returns node name
def route_by_priority(state):
    priority = state.data.get("priority", "normal")
    return "urgent_handler" if priority == "high" else "normal_handler"


graph.add_conditional_edges("classifier", route_by_priority)


# Mapped routing - condition result mapped to nodes
def get_category(state):
    return state.data.get("category", "default")


category_map = {
    "finance": "finance_processor",
    "legal": "legal_processor",
    "default": "general_processor",
}
graph.add_conditional_edges("categorizer", get_category, category_map)
Note

The condition function receives the current AgentState and should return consistent results for the same state. If using path_map, ensure the condition's return values match the map keys exactly.

Source code in pyagenity/graph/state_graph.py
def add_conditional_edges(
    self,
    from_node: str,
    condition: Callable,
    path_map: dict[str, str] | None = None,
) -> "StateGraph":
    """Add conditional routing between nodes based on runtime evaluation.

    Creates dynamic routing logic where the next node is determined by evaluating
    a condition function against the current state. This enables complex branching
    logic, decision trees, and adaptive workflow routing.

    Args:
        from_node: Name of the source node where the condition is evaluated.
        condition: Callable function that takes the current AgentState and returns
            a value used for routing decisions. Should be deterministic and
            side-effect free.
        path_map: Optional dictionary mapping condition results to destination nodes.
            If provided, the condition's return value is looked up in this mapping.
            If None, the condition should return the destination node name directly.

    Returns:
        StateGraph: The graph instance for method chaining.

    Raises:
        ValueError: If the condition function or path_map configuration is invalid.

    Example:
        ```python
        # Direct routing - condition returns node name
        def route_by_priority(state):
            priority = state.data.get("priority", "normal")
            return "urgent_handler" if priority == "high" else "normal_handler"


        graph.add_conditional_edges("classifier", route_by_priority)


        # Mapped routing - condition result mapped to nodes
        def get_category(state):
            return state.data.get("category", "default")


        category_map = {
            "finance": "finance_processor",
            "legal": "legal_processor",
            "default": "general_processor",
        }
        graph.add_conditional_edges("categorizer", get_category, category_map)
        ```

    Note:
        The condition function receives the current AgentState and should return
        consistent results for the same state. If using path_map, ensure the
        condition's return values match the map keys exactly.
    """
    """Add conditional edges from a node based on a condition function.

    Creates edges that are traversed based on the result of a condition
    function. The condition function receives the current state and should
    return a value that determines which edge to follow.

    Args:
        from_node: Name of the source node.
        condition: Function that evaluates the current state and returns
            a value to determine the next node.
        path_map: Optional mapping from condition results to target nodes.
            If provided, creates multiple conditional edges. If None,
            creates a single conditional edge.

    Returns:
        StateGraph: The graph instance for method chaining.

    Example:
        >>> def route_condition(state):
        ...     return "success" if state.success else "failure"
        >>> graph.add_conditional_edges(
        ...     "processor",
        ...     route_condition,
        ...     {"success": "next_step", "failure": "error_handler"},
        ... )
    """
    # Create edges based on possible returns from condition function
    logger.debug(
        "Node '%s' adding conditional edges with path_map: %s",
        from_node,
        path_map,
    )
    if path_map:
        logger.debug(
            "Node '%s' adding conditional edges with path_map: %s", from_node, path_map
        )
        for condition_result, target_node in path_map.items():
            edge = Edge(from_node, target_node, condition)
            edge.condition_result = condition_result
            self.edges.append(edge)
    else:
        # Single conditional edge
        logger.debug("Node '%s' adding single conditional edge", from_node)
        self.edges.append(Edge(from_node, "", condition))
    return self
add_edge
add_edge(from_node, to_node)

Add a static edge between two nodes.

Creates a direct connection from one node to another. If the source node is START, the target node becomes the entry point for the graph.

Parameters:

Name Type Description Default
from_node str

Name of the source node.

required
to_node str

Name of the target node.

required

Returns:

Name Type Description
StateGraph StateGraph

The graph instance for method chaining.

Example

graph.add_edge("node1", "node2") graph.add_edge(START, "entry_node") # Sets entry point

Source code in pyagenity/graph/state_graph.py
def add_edge(
    self,
    from_node: str,
    to_node: str,
) -> "StateGraph":
    """Add a static edge between two nodes.

    Creates a direct connection from one node to another. If the source
    node is START, the target node becomes the entry point for the graph.

    Args:
        from_node: Name of the source node.
        to_node: Name of the target node.

    Returns:
        StateGraph: The graph instance for method chaining.

    Example:
        >>> graph.add_edge("node1", "node2")
        >>> graph.add_edge(START, "entry_node")  # Sets entry point
    """
    logger.debug("Adding edge from '%s' to '%s'", from_node, to_node)
    # Set entry point if edge is from START
    if from_node == START:
        self.entry_point = to_node
        logger.info("Set entry point to '%s'", to_node)
    self.edges.append(Edge(from_node, to_node))
    logger.debug("Added edge (total edges: %d)", len(self.edges))
    return self
add_node
add_node(name_or_func, func=None)

Add a node to the graph.

This method supports two calling patterns: 1. Pass a callable as the first argument (name inferred from function name) 2. Pass a name string and callable/ToolNode as separate arguments

Parameters:

Name Type Description Default
name_or_func str | Callable

Either the node name (str) or a callable function. If callable, the function name will be used as the node name.

required
func Union[Callable, ToolNode, None]

The function or ToolNode to execute. Required if name_or_func is a string, ignored if name_or_func is callable.

None

Returns:

Name Type Description
StateGraph StateGraph

The graph instance for method chaining.

Raises:

Type Description
ValueError

If invalid arguments are provided.

Example
# Method 1: Function name inferred
graph.add_node(my_function)

# Method 2: Explicit name and function
graph.add_node("process", my_function)

Source code in pyagenity/graph/state_graph.py
def add_node(
    self,
    name_or_func: str | Callable,
    func: Union[Callable, "ToolNode", None] = None,
) -> "StateGraph":
    """Add a node to the graph.

    This method supports two calling patterns:
    1. Pass a callable as the first argument (name inferred from function name)
    2. Pass a name string and callable/ToolNode as separate arguments

    Args:
        name_or_func: Either the node name (str) or a callable function.
            If callable, the function name will be used as the node name.
        func: The function or ToolNode to execute. Required if name_or_func
            is a string, ignored if name_or_func is callable.

    Returns:
        StateGraph: The graph instance for method chaining.

    Raises:
        ValueError: If invalid arguments are provided.

    Example:
        >>> # Method 1: Function name inferred
        >>> graph.add_node(my_function)
        >>> # Method 2: Explicit name and function
        >>> graph.add_node("process", my_function)
    """
    if callable(name_or_func) and func is None:
        # Function passed as first argument
        name = name_or_func.__name__
        func = name_or_func
        logger.debug("Adding node '%s' with inferred name from function", name)
    elif isinstance(name_or_func, str) and (callable(func) or isinstance(func, ToolNode)):
        # Name and function passed separately
        name = name_or_func
        logger.debug(
            "Adding node '%s' with explicit name and %s",
            name,
            "ToolNode" if isinstance(func, ToolNode) else "callable",
        )
    else:
        error_msg = "Invalid arguments for add_node"
        logger.error(error_msg)
        raise ValueError(error_msg)

    self.nodes[name] = Node(name, func)
    logger.info("Added node '%s' to graph (total nodes: %d)", name, len(self.nodes))
    return self
compile
compile(checkpointer=None, store=None, interrupt_before=None, interrupt_after=None, callback_manager=CallbackManager())

Compile the graph for execution.

Parameters:

Name Type Description Default
checkpointer BaseCheckpointer[StateT] | None

Checkpointer for state persistence

None
store BaseStore | None

Store for additional data

None
interrupt_before list[str] | None

List of node names to interrupt before execution

None
interrupt_after list[str] | None

List of node names to interrupt after execution

None
callback_manager CallbackManager

Callback manager for executing hooks

CallbackManager()
Source code in pyagenity/graph/state_graph.py
def compile(
    self,
    checkpointer: BaseCheckpointer[StateT] | None = None,
    store: BaseStore | None = None,
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
    callback_manager: CallbackManager = CallbackManager(),
) -> "CompiledGraph[StateT]":
    """Compile the graph for execution.

    Args:
        checkpointer: Checkpointer for state persistence
        store: Store for additional data
        interrupt_before: List of node names to interrupt before execution
        interrupt_after: List of node names to interrupt after execution
        callback_manager: Callback manager for executing hooks
    """
    logger.info(
        "Compiling graph with %d nodes, %d edges, entry_point='%s'",
        len(self.nodes),
        len(self.edges),
        self.entry_point,
    )
    logger.debug(
        "Compile options: interrupt_before=%s, interrupt_after=%s",
        interrupt_before,
        interrupt_after,
    )

    if not self.entry_point:
        error_msg = "No entry point set. Use set_entry_point() or add an edge from START."
        logger.error(error_msg)
        raise GraphError(error_msg)

    # Validate graph structure
    logger.debug("Validating graph structure")
    self._validate_graph()
    logger.debug("Graph structure validated successfully")

    # Validate interrupt node names
    interrupt_before = interrupt_before or []
    interrupt_after = interrupt_after or []

    all_interrupt_nodes = set(interrupt_before + interrupt_after)
    invalid_nodes = all_interrupt_nodes - set(self.nodes.keys())
    if invalid_nodes:
        error_msg = f"Invalid interrupt nodes: {invalid_nodes}. Must be existing node names."
        logger.error(error_msg)
        raise GraphError(error_msg)

    self.compiled = True
    logger.info("Graph compilation completed successfully")
    # Import here to avoid circular import at module import time
    # Now update Checkpointer
    if checkpointer is None:
        from pyagenity.checkpointer import InMemoryCheckpointer

        checkpointer = InMemoryCheckpointer[StateT]()
        logger.debug("No checkpointer provided, using InMemoryCheckpointer")

    # Import the CompiledGraph class
    from .compiled_graph import CompiledGraph

    # Setup dependencies
    self._container.bind_instance(
        BaseCheckpointer,
        checkpointer,
        allow_concrete=True,
    )  # not null as we set default
    self._container.bind_instance(
        BaseStore,
        store,
        allow_none=True,
        allow_concrete=True,
    )
    self._container.bind_instance(
        CallbackManager,
        callback_manager,
        allow_concrete=True,
    )  # not null as we set default
    self._container.bind("interrupt_before", interrupt_before)
    self._container.bind("interrupt_after", interrupt_after)
    self._container.bind_instance(StateGraph, self)

    app = CompiledGraph(
        state=self._state,
        interrupt_after=interrupt_after,
        interrupt_before=interrupt_before,
        state_graph=self,
        checkpointer=checkpointer,
        publisher=self._publisher,
        store=store,
        task_manager=self._task_manager,
    )

    self._container.bind(CompiledGraph, app)
    # Compile the Graph, so it will optimize the dependency graph
    self._container.compile()
    return app
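
A minimal sketch of compiling with an explicit checkpointer and an interrupt point. The node names and placeholder bodies are illustrative; the compile arguments follow the signature shown above:

```python
from pyagenity.checkpointer import InMemoryCheckpointer
from pyagenity.graph import StateGraph
from pyagenity.utils import START, END


def plan(state, config):
    # Produce planning messages for the next step (sketch).
    return []


def act(state, config):
    # Act on the approved plan (sketch).
    return []


graph = StateGraph()
graph.add_node("plan", plan)
graph.add_node("act", act)
graph.add_edge(START, "plan")
graph.add_edge("plan", "act")
graph.add_edge("act", END)

# Pause before "act" runs so the plan can be reviewed; the checkpointer
# persists state across the pause (an in-memory one is also the default).
compiled = graph.compile(
    checkpointer=InMemoryCheckpointer(),
    interrupt_before=["act"],
)
```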
set_entry_point
set_entry_point(node_name)

Set the entry point for the graph and add an edge from START to it.

Source code in pyagenity/graph/state_graph.py
def set_entry_point(self, node_name: str) -> "StateGraph":
    """Set the entry point for the graph."""
    self.entry_point = node_name
    self.add_edge(START, node_name)
    logger.info("Set entry point to '%s'", node_name)
    return self
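
As the source shows, set_entry_point both records the entry node and wires START to it, so it is effectively shorthand for adding the START edge directly. A small sketch (the node name and lambda body are illustrative):

```python
from pyagenity.graph import StateGraph
from pyagenity.utils import START

graph = StateGraph()
graph.add_node("process", lambda state, config: [])

# Records the entry node and adds the START -> "process" edge in one call.
graph.set_entry_point("process")

# Roughly equivalent alternative, as the source above shows:
# graph.add_edge(START, "process")
```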

Functions

tool_node

ToolNode package.

This package provides a modularized implementation of ToolNode. Public API:

  • ToolNode
  • HAS_FASTMCP, HAS_MCP

Backwards-compatible import path: from pyagenity.graph.tool_node import ToolNode
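
Both import paths expose the same public API, and the dependency flags can be checked before wiring up an MCP client. A small sketch (the print messages are illustrative):

```python
# Package-level import and the backwards-compatible path are interchangeable.
from pyagenity.graph import ToolNode
from pyagenity.graph.tool_node import HAS_FASTMCP, HAS_MCP

if not (HAS_FASTMCP and HAS_MCP):
    # ToolNode(..., client=...) would raise ImportError in this situation.
    print("MCP extras missing; install with: pip install pyagenity[mcp]")
```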

Modules:

Name Description
base

Tool execution node for PyAgenity graph workflows.

constants

Constants for ToolNode package.

deps

Dependency flags and optional imports for ToolNode.

executors

Executors for different tool providers and local functions.

schema

Schema utilities and local tool description building for ToolNode.

Classes:

Name Description
ToolNode

A unified registry and executor for callable functions from various tool providers.

Attributes:

Name Type Description
HAS_FASTMCP
HAS_MCP

Attributes

HAS_FASTMCP module-attribute
HAS_FASTMCP = True
HAS_MCP module-attribute
HAS_MCP = True
__all__ module-attribute
__all__ = ['HAS_FASTMCP', 'HAS_MCP', 'ToolNode']

Classes

ToolNode

Bases: SchemaMixin, LocalExecMixin, MCPMixin, ComposioMixin, LangChainMixin, KwargsResolverMixin

A unified registry and executor for callable functions from various tool providers.

ToolNode serves as the central hub for managing and executing tools from multiple sources:

  • Local Python functions
  • MCP (Model Context Protocol) tools
  • Composio adapter tools
  • LangChain tools

The class uses a mixin-based architecture to separate concerns and maintain clean integration with different tool providers. It provides both synchronous and asynchronous execution methods with comprehensive event publishing and error handling.

Attributes:

Name Type Description
_funcs dict[str, Callable]

Dictionary mapping function names to callable functions.

_client Client | None

Optional MCP client for remote tool execution.

_composio ComposioAdapter | None

Optional Composio adapter for external integrations.

_langchain Any | None

Optional LangChain adapter for LangChain tools.

mcp_tools list[str]

List of available MCP tool names.

composio_tools list[str]

List of available Composio tool names.

langchain_tools list[str]

List of available LangChain tool names.

Example
# Define local tools
def weather_tool(location: str) -> str:
    return f"Weather in {location}: Sunny, 25°C"


def calculator(a: int, b: int) -> int:
    return a + b


# Create ToolNode with local functions
tools = ToolNode([weather_tool, calculator])

# Execute a tool
result = await tools.invoke(
    name="weather_tool",
    args={"location": "New York"},
    tool_call_id="call_123",
    config={"user_id": "user1"},
    state=agent_state,
)

Methods:

Name Description
__init__

Initialize ToolNode with functions and optional tool adapters.

all_tools

Get all available tools from all configured providers.

all_tools_sync

Synchronously get all available tools from all configured providers.

get_local_tool

Generate OpenAI-compatible tool definitions for all registered local functions.

invoke

Execute a specific tool by name with the provided arguments.

stream

Execute a tool with streaming support, yielding incremental results.

Source code in pyagenity/graph/tool_node/base.py
class ToolNode(
    SchemaMixin,
    LocalExecMixin,
    MCPMixin,
    ComposioMixin,
    LangChainMixin,
    KwargsResolverMixin,
):
    """A unified registry and executor for callable functions from various tool providers.

    ToolNode serves as the central hub for managing and executing tools from multiple sources:
    - Local Python functions
    - MCP (Model Context Protocol) tools
    - Composio adapter tools
    - LangChain tools

    The class uses a mixin-based architecture to separate concerns and maintain clean
    integration with different tool providers. It provides both synchronous and asynchronous
    execution methods with comprehensive event publishing and error handling.

    Attributes:
        _funcs: Dictionary mapping function names to callable functions.
        _client: Optional MCP client for remote tool execution.
        _composio: Optional Composio adapter for external integrations.
        _langchain: Optional LangChain adapter for LangChain tools.
        mcp_tools: List of available MCP tool names.
        composio_tools: List of available Composio tool names.
        langchain_tools: List of available LangChain tool names.

    Example:
        ```python
        # Define local tools
        def weather_tool(location: str) -> str:
            return f"Weather in {location}: Sunny, 25°C"


        def calculator(a: int, b: int) -> int:
            return a + b


        # Create ToolNode with local functions
        tools = ToolNode([weather_tool, calculator])

        # Execute a tool
        result = await tools.invoke(
            name="weather_tool",
            args={"location": "New York"},
            tool_call_id="call_123",
            config={"user_id": "user1"},
            state=agent_state,
        )
        ```
    """

    def __init__(
        self,
        functions: t.Iterable[t.Callable],
        client: deps.Client | None = None,  # type: ignore
        composio_adapter: ComposioAdapter | None = None,
        langchain_adapter: t.Any | None = None,
    ) -> None:
        """Initialize ToolNode with functions and optional tool adapters.

        Args:
            functions: Iterable of callable functions to register as tools. Each function
                will be registered with its `__name__` as the tool identifier.
            client: Optional MCP (Model Context Protocol) client for remote tool access.
                Requires 'fastmcp' and 'mcp' packages to be installed.
            composio_adapter: Optional Composio adapter for external integrations and
                third-party API access.
            langchain_adapter: Optional LangChain adapter for accessing LangChain tools
                and integrations.

        Raises:
            ImportError: If MCP client is provided but required packages are not installed.
            TypeError: If any item in functions is not callable.

        Note:
            When using MCP client functionality, ensure you have installed the required
            dependencies with: `pip install pyagenity[mcp]`
        """
        logger.info("Initializing ToolNode with %d functions", len(list(functions)))

        if client is not None:
            # Read flags dynamically so tests can patch pyagenity.graph.tool_node.HAS_*
            mod = sys.modules.get("pyagenity.graph.tool_node")
            has_fastmcp = getattr(mod, "HAS_FASTMCP", deps.HAS_FASTMCP) if mod else deps.HAS_FASTMCP
            has_mcp = getattr(mod, "HAS_MCP", deps.HAS_MCP) if mod else deps.HAS_MCP

            if not has_fastmcp or not has_mcp:
                raise ImportError(
                    "MCP client functionality requires 'fastmcp' and 'mcp' packages. "
                    "Install with: pip install pyagenity[mcp]"
                )
            logger.debug("ToolNode initialized with MCP client")

        self._funcs: dict[str, t.Callable] = {}
        self._client: deps.Client | None = client  # type: ignore
        self._composio: ComposioAdapter | None = composio_adapter
        self._langchain: t.Any | None = langchain_adapter

        for fn in functions:
            if not callable(fn):
                raise TypeError("ToolNode only accepts callables")
            self._funcs[fn.__name__] = fn

        self.mcp_tools: list[str] = []
        self.composio_tools: list[str] = []
        self.langchain_tools: list[str] = []

    async def _all_tools_async(self) -> list[dict]:
        tools: list[dict] = self.get_local_tool()
        tools.extend(await self._get_mcp_tool())
        tools.extend(await self._get_composio_tools())
        tools.extend(await self._get_langchain_tools())
        return tools

    async def all_tools(self) -> list[dict]:
        """Get all available tools from all configured providers.

        Retrieves and combines tool definitions from local functions, MCP client,
        Composio adapter, and LangChain adapter. Each tool definition includes
        the function schema with parameters and descriptions.

        Returns:
            List of tool definitions in OpenAI function calling format. Each dict
            contains 'type': 'function' and 'function' with name, description,
            and parameters schema.

        Example:
            ```python
            tools = await tool_node.all_tools()
            # Returns:
            # [
            #   {
            #     "type": "function",
            #     "function": {
            #       "name": "weather_tool",
            #       "description": "Get weather information for a location",
            #       "parameters": {
            #         "type": "object",
            #         "properties": {
            #           "location": {"type": "string"}
            #         },
            #         "required": ["location"]
            #       }
            #     }
            #   }
            # ]
            ```
        """
        return await self._all_tools_async()

    def all_tools_sync(self) -> list[dict]:
        """Synchronously get all available tools from all configured providers.

        This is a synchronous wrapper around the async all_tools() method.
        It uses asyncio.run() to handle async operations from MCP, Composio,
        and LangChain adapters.

        Returns:
            List of tool definitions in OpenAI function calling format.

        Note:
            Prefer using the async `all_tools()` method when possible, especially
            in async contexts, to avoid potential event loop issues.
        """
        tools: list[dict] = self.get_local_tool()
        if self._client:
            result = asyncio.run(self._get_mcp_tool())
            if result:
                tools.extend(result)
        comp = asyncio.run(self._get_composio_tools())
        if comp:
            tools.extend(comp)
        lc = asyncio.run(self._get_langchain_tools())
        if lc:
            tools.extend(lc)
        return tools

    async def invoke(  # noqa: PLR0915
        self,
        name: str,
        args: dict,
        tool_call_id: str,
        config: dict[str, t.Any],
        state: AgentState,
        callback_manager: CallbackManager = Inject[CallbackManager],
    ) -> t.Any:
        """Execute a specific tool by name with the provided arguments.

        This method handles tool execution across all configured providers (local,
        MCP, Composio, LangChain) with comprehensive error handling, event publishing,
        and callback management.

        Args:
            name: The name of the tool to execute.
            args: Dictionary of arguments to pass to the tool function.
            tool_call_id: Unique identifier for this tool execution, used for
                tracking and result correlation.
            config: Configuration dictionary containing execution context and
                user-specific settings.
            state: Current agent state for context-aware tool execution.
            callback_manager: Manager for executing pre/post execution callbacks.
                Injected via dependency injection if not provided.

        Returns:
            Message object containing tool execution results, either successful
            output or error information with appropriate status indicators.

        Raises:
            The method handles all exceptions internally and returns error Messages
            rather than raising exceptions, ensuring robust execution flow.

        Example:
            ```python
            result = await tool_node.invoke(
                name="weather_tool",
                args={"location": "Paris", "units": "metric"},
                tool_call_id="call_abc123",
                config={"user_id": "user1", "session_id": "session1"},
                state=current_agent_state,
            )

            # result is a Message with tool execution results
            print(result.content)  # Tool output or error information
            ```

        Note:
            The method publishes execution events throughout the process for
            monitoring and debugging purposes. Tool execution is routed based
            on tool provider precedence: MCP → Composio → LangChain → Local.
        """
        logger.info("Executing tool '%s' with %d arguments", name, len(args))
        logger.debug("Tool arguments: %s", args)

        event = EventModel.default(
            config,
            data={"args": args, "tool_call_id": tool_call_id, "function_name": name},
            content_type=[ContentType.TOOL_CALL],
            event=Event.TOOL_EXECUTION,
        )
        event.node_name = name
        # Attach structured tool call block
        with contextlib.suppress(Exception):
            event.content_blocks = [ToolCallBlock(id=tool_call_id, name=name, args=args)]
        publish_event(event)

        if name in self.mcp_tools:
            event.metadata["is_mcp"] = True
            publish_event(event)
            res = await self._mcp_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = res.model_dump()
            # Attach tool result block mirroring the tool output
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            return res

        if name in self.composio_tools:
            event.metadata["is_composio"] = True
            publish_event(event)
            res = await self._composio_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = res.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            return res

        if name in self.langchain_tools:
            event.metadata["is_langchain"] = True
            publish_event(event)
            res = await self._langchain_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = res.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            return res

        if name in self._funcs:
            event.metadata["is_mcp"] = False
            publish_event(event)
            res = await self._internal_execute(
                name,
                args,
                tool_call_id,
                config,
                state,
                callback_manager,
            )
            event.data["message"] = res.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            return res

        error_msg = f"Tool '{name}' not found."
        event.data["error"] = error_msg
        event.event_type = EventType.ERROR
        event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
        publish_event(event)
        return Message.tool_message(
            content=[
                ErrorBlock(message=error_msg),
                ToolResultBlock(
                    call_id=tool_call_id,
                    output=error_msg,
                    is_error=True,
                    status="failed",
                ),
            ],
        )

    async def stream(  # noqa: PLR0915
        self,
        name: str,
        args: dict,
        tool_call_id: str,
        config: dict[str, t.Any],
        state: AgentState,
        callback_manager: CallbackManager = Inject[CallbackManager],
    ) -> t.AsyncIterator[Message]:
        """Execute a tool with streaming support, yielding incremental results.

        Similar to invoke() but designed for tools that can provide streaming responses
        or when you want to process results as they become available. Currently,
        most tool providers return complete results, so this method typically yields
        a single Message with the full result.

        Args:
            name: The name of the tool to execute.
            args: Dictionary of arguments to pass to the tool function.
            tool_call_id: Unique identifier for this tool execution.
            config: Configuration dictionary containing execution context.
            state: Current agent state for context-aware tool execution.
            callback_manager: Manager for executing pre/post execution callbacks.

        Yields:
            Message objects containing tool execution results or status updates.
            For most tools, this will yield a single complete result Message.

        Example:
            ```python
            async for message in tool_node.stream(
                name="data_processor",
                args={"dataset": "large_data.csv"},
                tool_call_id="call_stream123",
                config={"user_id": "user1"},
                state=current_state,
            ):
                print(f"Received: {message.content}")
                # Process each streamed result
            ```

        Note:
            The streaming interface is designed for future expansion where tools
            may provide true streaming responses. Currently, it provides a
            consistent async iterator interface over tool results.
        """
        logger.info("Executing tool '%s' with %d arguments", name, len(args))
        logger.debug("Tool arguments: %s", args)
        event = EventModel.default(
            config,
            data={"args": args, "tool_call_id": tool_call_id, "function_name": name},
            content_type=[ContentType.TOOL_CALL],
            event=Event.TOOL_EXECUTION,
        )
        event.node_name = "ToolNode"
        with contextlib.suppress(Exception):
            event.content_blocks = [ToolCallBlock(id=tool_call_id, name=name, args=args)]

        if name in self.mcp_tools:
            event.metadata["function_type"] = "mcp"
            publish_event(event)
            message = await self._mcp_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = message.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            yield message
            return

        if name in self.composio_tools:
            event.metadata["function_type"] = "composio"
            publish_event(event)
            message = await self._composio_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = message.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            yield message
            return

        if name in self.langchain_tools:
            event.metadata["function_type"] = "langchain"
            publish_event(event)
            message = await self._langchain_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = message.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            yield message
            return

        if name in self._funcs:
            event.metadata["function_type"] = "internal"
            publish_event(event)

            result = await self._internal_execute(
                name,
                args,
                tool_call_id,
                config,
                state,
                callback_manager,
            )
            event.data["message"] = result.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=result.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            yield result
            return

        error_msg = f"Tool '{name}' not found."
        event.data["error"] = error_msg
        event.event_type = EventType.ERROR
        event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
        publish_event(event)

        yield Message.tool_message(
            content=[
                ErrorBlock(message=error_msg),
                ToolResultBlock(
                    call_id=tool_call_id,
                    output=error_msg,
                    is_error=True,
                    status="failed",
                ),
            ],
        )
Attributes
composio_tools instance-attribute
composio_tools = []
langchain_tools instance-attribute
langchain_tools = []
mcp_tools instance-attribute
mcp_tools = []
Functions
__init__
__init__(functions, client=None, composio_adapter=None, langchain_adapter=None)

Initialize ToolNode with functions and optional tool adapters.

Parameters:

Name Type Description Default
functions Iterable[Callable]

Iterable of callable functions to register as tools. Each function will be registered with its __name__ as the tool identifier.

required
client Client | None

Optional MCP (Model Context Protocol) client for remote tool access. Requires 'fastmcp' and 'mcp' packages to be installed.

None
composio_adapter ComposioAdapter | None

Optional Composio adapter for external integrations and third-party API access.

None
langchain_adapter Any | None

Optional LangChain adapter for accessing LangChain tools and integrations.

None

Raises:

Type Description
ImportError

If MCP client is provided but required packages are not installed.

TypeError

If any item in functions is not callable.

Note

When using MCP client functionality, ensure you have installed the required dependencies with: pip install pyagenity[mcp]

Source code in pyagenity/graph/tool_node/base.py
def __init__(
    self,
    functions: t.Iterable[t.Callable],
    client: deps.Client | None = None,  # type: ignore
    composio_adapter: ComposioAdapter | None = None,
    langchain_adapter: t.Any | None = None,
) -> None:
    """Initialize ToolNode with functions and optional tool adapters.

    Args:
        functions: Iterable of callable functions to register as tools. Each function
            will be registered with its `__name__` as the tool identifier.
        client: Optional MCP (Model Context Protocol) client for remote tool access.
            Requires 'fastmcp' and 'mcp' packages to be installed.
        composio_adapter: Optional Composio adapter for external integrations and
            third-party API access.
        langchain_adapter: Optional LangChain adapter for accessing LangChain tools
            and integrations.

    Raises:
        ImportError: If MCP client is provided but required packages are not installed.
        TypeError: If any item in functions is not callable.

    Note:
        When using MCP client functionality, ensure you have installed the required
        dependencies with: `pip install pyagenity[mcp]`
    """
    logger.info("Initializing ToolNode with %d functions", len(list(functions)))

    if client is not None:
        # Read flags dynamically so tests can patch pyagenity.graph.tool_node.HAS_*
        mod = sys.modules.get("pyagenity.graph.tool_node")
        has_fastmcp = getattr(mod, "HAS_FASTMCP", deps.HAS_FASTMCP) if mod else deps.HAS_FASTMCP
        has_mcp = getattr(mod, "HAS_MCP", deps.HAS_MCP) if mod else deps.HAS_MCP

        if not has_fastmcp or not has_mcp:
            raise ImportError(
                "MCP client functionality requires 'fastmcp' and 'mcp' packages. "
                "Install with: pip install pyagenity[mcp]"
            )
        logger.debug("ToolNode initialized with MCP client")

    self._funcs: dict[str, t.Callable] = {}
    self._client: deps.Client | None = client  # type: ignore
    self._composio: ComposioAdapter | None = composio_adapter
    self._langchain: t.Any | None = langchain_adapter

    for fn in functions:
        if not callable(fn):
            raise TypeError("ToolNode only accepts callables")
        self._funcs[fn.__name__] = fn

    self.mcp_tools: list[str] = []
    self.composio_tools: list[str] = []
    self.langchain_tools: list[str] = []
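
The constructor validates its inputs eagerly: non-callable entries raise TypeError, and passing an MCP client without the optional packages raises ImportError. A brief sketch of the callable check (the `echo` function is illustrative):

```python
from pyagenity.graph import ToolNode


def echo(text: str) -> str:
    """Return the input text unchanged."""
    return text


tools = ToolNode([echo])  # all entries are callable, so registration succeeds

try:
    ToolNode([echo, "not-a-function"])  # the string entry is rejected
except TypeError as exc:
    print(exc)  # ToolNode only accepts callables
```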
all_tools async
all_tools()

Get all available tools from all configured providers.

Retrieves and combines tool definitions from local functions, MCP client, Composio adapter, and LangChain adapter. Each tool definition includes the function schema with parameters and descriptions.

Returns:

Type Description
list[dict]

List of tool definitions in OpenAI function calling format. Each dict

list[dict]

contains 'type': 'function' and 'function' with name, description,

list[dict]

and parameters schema.

Example
tools = await tool_node.all_tools()
# Returns:
# [
#   {
#     "type": "function",
#     "function": {
#       "name": "weather_tool",
#       "description": "Get weather information for a location",
#       "parameters": {
#         "type": "object",
#         "properties": {
#           "location": {"type": "string"}
#         },
#         "required": ["location"]
#       }
#     }
#   }
# ]
Source code in pyagenity/graph/tool_node/base.py
async def all_tools(self) -> list[dict]:
    """Get all available tools from all configured providers.

    Retrieves and combines tool definitions from local functions, MCP client,
    Composio adapter, and LangChain adapter. Each tool definition includes
    the function schema with parameters and descriptions.

    Returns:
        List of tool definitions in OpenAI function calling format. Each dict
        contains 'type': 'function' and 'function' with name, description,
        and parameters schema.

    Example:
        ```python
        tools = await tool_node.all_tools()
        # Returns:
        # [
        #   {
        #     "type": "function",
        #     "function": {
        #       "name": "weather_tool",
        #       "description": "Get weather information for a location",
        #       "parameters": {
        #         "type": "object",
        #         "properties": {
        #           "location": {"type": "string"}
        #         },
        #         "required": ["location"]
        #       }
        #     }
        #   }
        # ]
        ```
    """
    return await self._all_tools_async()
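
Because the combined definitions use the OpenAI function-calling shape documented above, downstream code can inspect or forward them directly. A small sketch that lists the registered tool names (the `weather_tool` function mirrors the class example):

```python
import asyncio

from pyagenity.graph import ToolNode


def weather_tool(location: str) -> str:
    """Get weather information for a location."""
    return f"Weather in {location}: Sunny, 25°C"


async def main() -> None:
    tools = ToolNode([weather_tool])
    definitions = await tools.all_tools()
    # Each definition follows {"type": "function", "function": {...}}.
    for definition in definitions:
        fn = definition["function"]
        print(fn["name"], "-", fn["description"])


asyncio.run(main())
```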
all_tools_sync
all_tools_sync()

Synchronously get all available tools from all configured providers.

This is a synchronous wrapper around the async all_tools() method. It uses asyncio.run() to handle async operations from MCP, Composio, and LangChain adapters.

Returns:

Type Description
list[dict]

List of tool definitions in OpenAI function calling format.

Note

Prefer using the async all_tools() method when possible, especially in async contexts, to avoid potential event loop issues.

Source code in pyagenity/graph/tool_node/base.py
def all_tools_sync(self) -> list[dict]:
    """Synchronously get all available tools from all configured providers.

    This is a synchronous wrapper around the async all_tools() method.
    It uses asyncio.run() to handle async operations from MCP, Composio,
    and LangChain adapters.

    Returns:
        List of tool definitions in OpenAI function calling format.

    Note:
        Prefer using the async `all_tools()` method when possible, especially
        in async contexts, to avoid potential event loop issues.
    """
    tools: list[dict] = self.get_local_tool()
    if self._client:
        result = asyncio.run(self._get_mcp_tool())
        if result:
            tools.extend(result)
    comp = asyncio.run(self._get_composio_tools())
    if comp:
        tools.extend(comp)
    lc = asyncio.run(self._get_langchain_tools())
    if lc:
        tools.extend(lc)
    return tools
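
Because all_tools_sync() calls asyncio.run() internally, it is intended for synchronous setup code rather than for use inside a running event loop. A sketch of the setup-time case (the `calculator` function is illustrative):

```python
from pyagenity.graph import ToolNode


def calculator(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


tools = ToolNode([calculator])

# Safe here: no event loop is running, so asyncio.run() inside
# all_tools_sync() can create one.
definitions = tools.all_tools_sync()
print([d["function"]["name"] for d in definitions])
```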
get_local_tool
get_local_tool()

Generate OpenAI-compatible tool definitions for all registered local functions.

Inspects all registered functions in _funcs and automatically generates tool schemas by analyzing function signatures, type annotations, and docstrings. Excludes injectable parameters that are provided by the framework.

Returns:

Type Description
list[dict]

List of tool definitions in OpenAI function calling format. Each

list[dict]

definition includes the function name, description (from docstring),

list[dict]

and complete parameter schema with types and required fields.

Example

For a function:

def calculate(a: int, b: int, operation: str = "add") -> int:
    '''Perform arithmetic calculation.'''
    return a + b if operation == "add" else a - b

Returns:

[
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Perform arithmetic calculation.",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "integer"},
                    "b": {"type": "integer"},
                    "operation": {"type": "string", "default": "add"},
                },
                "required": ["a", "b"],
            },
        },
    }
]

Note

Parameters listed in INJECTABLE_PARAMS (like 'state', 'config', 'tool_call_id') are automatically excluded from the generated schema as they are provided by the framework during execution.

Source code in pyagenity/graph/tool_node/schema.py
def get_local_tool(self) -> list[dict]:
    """Generate OpenAI-compatible tool definitions for all registered local functions.

    Inspects all registered functions in _funcs and automatically generates
    tool schemas by analyzing function signatures, type annotations, and docstrings.
    Excludes injectable parameters that are provided by the framework.

    Returns:
        List of tool definitions in OpenAI function calling format. Each
        definition includes the function name, description (from docstring),
        and complete parameter schema with types and required fields.

    Example:
        For a function:
        ```python
        def calculate(a: int, b: int, operation: str = "add") -> int:
            '''Perform arithmetic calculation.'''
            return a + b if operation == "add" else a - b
        ```

        Returns:
        ```python
        [
            {
                "type": "function",
                "function": {
                    "name": "calculate",
                    "description": "Perform arithmetic calculation.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "a": {"type": "integer"},
                            "b": {"type": "integer"},
                            "operation": {"type": "string", "default": "add"},
                        },
                        "required": ["a", "b"],
                    },
                },
            }
        ]
        ```

    Note:
        Parameters listed in INJECTABLE_PARAMS (like 'state', 'config',
        'tool_call_id') are automatically excluded from the generated schema
        as they are provided by the framework during execution.
    """
    tools: list[dict] = []
    for name, fn in self._funcs.items():
        sig = inspect.signature(fn)
        params_schema: dict = {"type": "object", "properties": {}, "required": []}

        for p_name, p in sig.parameters.items():
            if p.kind in (
                inspect.Parameter.VAR_POSITIONAL,
                inspect.Parameter.VAR_KEYWORD,
            ):
                continue

            if p_name in INJECTABLE_PARAMS:
                continue

            annotation = p.annotation if p.annotation is not inspect._empty else str
            prop = SchemaMixin._annotation_to_schema(annotation, p.default)
            params_schema["properties"][p_name] = prop

            if p.default is inspect._empty:
                params_schema["required"].append(p_name)

        if not params_schema["required"]:
            params_schema.pop("required")

        description = inspect.getdoc(fn) or "No description provided."

        # provider = getattr(fn, "_py_tool_provider", None)
        # tags = getattr(fn, "_py_tool_tags", None)
        # capabilities = getattr(fn, "_py_tool_capabilities", None)

        entry = {
            "type": "function",
            "function": {
                "name": name,
                "description": description,
                "parameters": params_schema,
            },
        }
        # meta: dict[str, t.Any] = {}
        # if provider:
        #     meta["provider"] = provider
        # if tags:
        #     meta["tags"] = tags
        # if capabilities:
        #     meta["capabilities"] = capabilities
        # if meta:
        #     entry["x-pyagenity"] = meta

        tools.append(entry)

    return tools
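
Injectable parameters are filtered out of the generated schema, so a tool can accept framework-provided values without them leaking into the model-facing definition. A sketch, assuming `state` and `tool_call_id` are among INJECTABLE_PARAMS as noted above (the `lookup` function is hypothetical):

```python
from pyagenity.graph import ToolNode


def lookup(query: str, limit: int = 5, state=None, tool_call_id: str = "") -> str:
    """Look up records matching a query."""
    return f"{limit} results for {query}"


tools = ToolNode([lookup])
schema = tools.get_local_tool()[0]["function"]["parameters"]

# Only model-facing parameters survive; 'state' and 'tool_call_id' are
# injected by the framework at execution time.
print(sorted(schema["properties"]))  # expected: ['limit', 'query']
print(schema["required"])            # expected: ['query']
```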
invoke async
invoke(name, args, tool_call_id, config, state, callback_manager=Inject[CallbackManager])

Execute a specific tool by name with the provided arguments.

This method handles tool execution across all configured providers (local, MCP, Composio, LangChain) with comprehensive error handling, event publishing, and callback management.

Parameters:

Name Type Description Default
name str

The name of the tool to execute.

required
args dict

Dictionary of arguments to pass to the tool function.

required
tool_call_id str

Unique identifier for this tool execution, used for tracking and result correlation.

required
config dict[str, Any]

Configuration dictionary containing execution context and user-specific settings.

required
state AgentState

Current agent state for context-aware tool execution.

required
callback_manager CallbackManager

Manager for executing pre/post execution callbacks. Injected via dependency injection if not provided.

Inject[CallbackManager]

Returns:

Type Description
Any

Message object containing tool execution results, either successful

Any

output or error information with appropriate status indicators.

Example
result = await tool_node.invoke(
    name="weather_tool",
    args={"location": "Paris", "units": "metric"},
    tool_call_id="call_abc123",
    config={"user_id": "user1", "session_id": "session1"},
    state=current_agent_state,
)

# result is a Message with tool execution results
print(result.content)  # Tool output or error information
Note

The method publishes execution events throughout the process for monitoring and debugging purposes. Tool execution is routed based on tool provider precedence: MCP → Composio → LangChain → Local.

Source code in pyagenity/graph/tool_node/base.py
async def invoke(  # noqa: PLR0915
    self,
    name: str,
    args: dict,
    tool_call_id: str,
    config: dict[str, t.Any],
    state: AgentState,
    callback_manager: CallbackManager = Inject[CallbackManager],
) -> t.Any:
    """Execute a specific tool by name with the provided arguments.

    This method handles tool execution across all configured providers (local,
    MCP, Composio, LangChain) with comprehensive error handling, event publishing,
    and callback management.

    Args:
        name: The name of the tool to execute.
        args: Dictionary of arguments to pass to the tool function.
        tool_call_id: Unique identifier for this tool execution, used for
            tracking and result correlation.
        config: Configuration dictionary containing execution context and
            user-specific settings.
        state: Current agent state for context-aware tool execution.
        callback_manager: Manager for executing pre/post execution callbacks.
            Injected via dependency injection if not provided.

    Returns:
        Message object containing tool execution results, either successful
        output or error information with appropriate status indicators.

    Raises:
        The method handles all exceptions internally and returns error Messages
        rather than raising exceptions, ensuring robust execution flow.

    Example:
        ```python
        result = await tool_node.invoke(
            name="weather_tool",
            args={"location": "Paris", "units": "metric"},
            tool_call_id="call_abc123",
            config={"user_id": "user1", "session_id": "session1"},
            state=current_agent_state,
        )

        # result is a Message with tool execution results
        print(result.content)  # Tool output or error information
        ```

    Note:
        The method publishes execution events throughout the process for
        monitoring and debugging purposes. Tool execution is routed based
        on tool provider precedence: MCP → Composio → LangChain → Local.
    """
    logger.info("Executing tool '%s' with %d arguments", name, len(args))
    logger.debug("Tool arguments: %s", args)

    event = EventModel.default(
        config,
        data={"args": args, "tool_call_id": tool_call_id, "function_name": name},
        content_type=[ContentType.TOOL_CALL],
        event=Event.TOOL_EXECUTION,
    )
    event.node_name = name
    # Attach structured tool call block
    with contextlib.suppress(Exception):
        event.content_blocks = [ToolCallBlock(id=tool_call_id, name=name, args=args)]
    publish_event(event)

    if name in self.mcp_tools:
        event.metadata["is_mcp"] = True
        publish_event(event)
        res = await self._mcp_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = res.model_dump()
        # Attach tool result block mirroring the tool output
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        return res

    if name in self.composio_tools:
        event.metadata["is_composio"] = True
        publish_event(event)
        res = await self._composio_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = res.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        return res

    if name in self.langchain_tools:
        event.metadata["is_langchain"] = True
        publish_event(event)
        res = await self._langchain_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = res.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        return res

    if name in self._funcs:
        event.metadata["is_mcp"] = False
        publish_event(event)
        res = await self._internal_execute(
            name,
            args,
            tool_call_id,
            config,
            state,
            callback_manager,
        )
        event.data["message"] = res.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        return res

    error_msg = f"Tool '{name}' not found."
    event.data["error"] = error_msg
    event.event_type = EventType.ERROR
    event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
    publish_event(event)
    return Message.tool_message(
        content=[
            ErrorBlock(message=error_msg),
            ToolResultBlock(
                call_id=tool_call_id,
                output=error_msg,
                is_error=True,
                status="failed",
            ),
        ],
    )
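
Note that invoke() does not raise for an unknown tool; it returns a tool Message whose content carries an ErrorBlock and a failed ToolResultBlock, so callers can branch on the result instead of wrapping every call in try/except. A sketch of that check, continuing the ToolNode example above (`tools` and `agent_state` as defined there; the attribute lookup assumes the block fields shown in the source):

```python
# Inside an async function, continuing the ToolNode example above.
result = await tools.invoke(
    name="does_not_exist",
    args={},
    tool_call_id="call_missing",
    config={"user_id": "user1"},
    state=agent_state,
)

# The failure is reported in the message content rather than raised.
failed = any(getattr(block, "is_error", False) for block in result.content)
if failed:
    print("Tool call failed:", result.content)
```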
stream async
stream(name, args, tool_call_id, config, state, callback_manager=Inject[CallbackManager])

Execute a tool with streaming support, yielding incremental results.

Similar to invoke() but designed for tools that can provide streaming responses or when you want to process results as they become available. Currently, most tool providers return complete results, so this method typically yields a single Message with the full result.

Parameters:

Name Type Description Default
name str

The name of the tool to execute.

required
args dict

Dictionary of arguments to pass to the tool function.

required
tool_call_id str

Unique identifier for this tool execution.

required
config dict[str, Any]

Configuration dictionary containing execution context.

required
state AgentState

Current agent state for context-aware tool execution.

required
callback_manager CallbackManager

Manager for executing pre/post execution callbacks.

Inject[CallbackManager]

Yields:

Type Description
AsyncIterator[Message]

Message objects containing tool execution results or status updates.

AsyncIterator[Message]

For most tools, this will yield a single complete result Message.

Example
async for message in tool_node.stream(
    name="data_processor",
    args={"dataset": "large_data.csv"},
    tool_call_id="call_stream123",
    config={"user_id": "user1"},
    state=current_state,
):
    print(f"Received: {message.content}")
    # Process each streamed result
Note

The streaming interface is designed for future expansion where tools may provide true streaming responses. Currently, it provides a consistent async iterator interface over tool results.

Source code in pyagenity/graph/tool_node/base.py
async def stream(  # noqa: PLR0915
    self,
    name: str,
    args: dict,
    tool_call_id: str,
    config: dict[str, t.Any],
    state: AgentState,
    callback_manager: CallbackManager = Inject[CallbackManager],
) -> t.AsyncIterator[Message]:
    """Execute a tool with streaming support, yielding incremental results.

    Similar to invoke() but designed for tools that can provide streaming responses
    or when you want to process results as they become available. Currently,
    most tool providers return complete results, so this method typically yields
    a single Message with the full result.

    Args:
        name: The name of the tool to execute.
        args: Dictionary of arguments to pass to the tool function.
        tool_call_id: Unique identifier for this tool execution.
        config: Configuration dictionary containing execution context.
        state: Current agent state for context-aware tool execution.
        callback_manager: Manager for executing pre/post execution callbacks.

    Yields:
        Message objects containing tool execution results or status updates.
        For most tools, this will yield a single complete result Message.

    Example:
        ```python
        async for message in tool_node.stream(
            name="data_processor",
            args={"dataset": "large_data.csv"},
            tool_call_id="call_stream123",
            config={"user_id": "user1"},
            state=current_state,
        ):
            print(f"Received: {message.content}")
            # Process each streamed result
        ```

    Note:
        The streaming interface is designed for future expansion where tools
        may provide true streaming responses. Currently, it provides a
        consistent async iterator interface over tool results.
    """
    logger.info("Executing tool '%s' with %d arguments", name, len(args))
    logger.debug("Tool arguments: %s", args)
    event = EventModel.default(
        config,
        data={"args": args, "tool_call_id": tool_call_id, "function_name": name},
        content_type=[ContentType.TOOL_CALL],
        event=Event.TOOL_EXECUTION,
    )
    event.node_name = "ToolNode"
    with contextlib.suppress(Exception):
        event.content_blocks = [ToolCallBlock(id=tool_call_id, name=name, args=args)]

    if name in self.mcp_tools:
        event.metadata["function_type"] = "mcp"
        publish_event(event)
        message = await self._mcp_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = message.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        yield message
        return

    if name in self.composio_tools:
        event.metadata["function_type"] = "composio"
        publish_event(event)
        message = await self._composio_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = message.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        yield message
        return

    if name in self.langchain_tools:
        event.metadata["function_type"] = "langchain"
        publish_event(event)
        message = await self._langchain_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = message.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        yield message
        return

    if name in self._funcs:
        event.metadata["function_type"] = "internal"
        publish_event(event)

        result = await self._internal_execute(
            name,
            args,
            tool_call_id,
            config,
            state,
            callback_manager,
        )
        event.data["message"] = result.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=result.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        yield result
        return

    error_msg = f"Tool '{name}' not found."
    event.data["error"] = error_msg
    event.event_type = EventType.ERROR
    event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
    publish_event(event)

    yield Message.tool_message(
        content=[
            ErrorBlock(message=error_msg),
            ToolResultBlock(
                call_id=tool_call_id,
                output=error_msg,
                is_error=True,
                status="failed",
            ),
        ],
    )
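
Even though most providers currently return a single message, the async-iterator shape makes it easy to collect results uniformly. A small sketch, reusing the ToolNode from the class example (`tools` and `agent_state` as defined there):

```python
# Inside an async function, continuing the ToolNode example above.
messages = []
async for message in tools.stream(
    name="weather_tool",
    args={"location": "Tokyo"},
    tool_call_id="call_stream_1",
    config={"user_id": "user1"},
    state=agent_state,
):
    messages.append(message)

# Today this usually yields exactly one complete result message.
print(len(messages), messages[0].content)
```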

Modules

base

Tool execution node for PyAgenity graph workflows.

This module provides the ToolNode class, which serves as a unified registry and executor for callable functions from various sources including local functions, MCP (Model Context Protocol) tools, Composio adapters, and LangChain tools. The ToolNode is designed with a modular architecture using mixins to handle different tool providers.

The ToolNode maintains compatibility with PyAgenity's dependency injection system and publishes execution events for monitoring and debugging purposes.

Typical usage example
def my_tool(query: str) -> str:
    return f"Result for: {query}"


tools = ToolNode([my_tool])
result = await tools.invoke("my_tool", {"query": "test"}, "call_id", config, state)

Classes:

Name Description
ToolNode

A unified registry and executor for callable functions from various tool providers.

Attributes:

Name Type Description
logger
Attributes
logger module-attribute
logger = getLogger(__name__)
Classes
ToolNode

Bases: SchemaMixin, LocalExecMixin, MCPMixin, ComposioMixin, LangChainMixin, KwargsResolverMixin

A unified registry and executor for callable functions from various tool providers.

ToolNode serves as the central hub for managing and executing tools from multiple sources:

  • Local Python functions
  • MCP (Model Context Protocol) tools
  • Composio adapter tools
  • LangChain tools

The class uses a mixin-based architecture to separate concerns and maintain clean integration with different tool providers. It provides both synchronous and asynchronous execution methods with comprehensive event publishing and error handling.

Attributes:

Name Type Description
_funcs dict[str, Callable]

Dictionary mapping function names to callable functions.

_client Client | None

Optional MCP client for remote tool execution.

_composio ComposioAdapter | None

Optional Composio adapter for external integrations.

_langchain Any | None

Optional LangChain adapter for LangChain tools.

mcp_tools list[str]

List of available MCP tool names.

composio_tools list[str]

List of available Composio tool names.

langchain_tools list[str]

List of available LangChain tool names.

Example
# Define local tools
def weather_tool(location: str) -> str:
    return f"Weather in {location}: Sunny, 25°C"


def calculator(a: int, b: int) -> int:
    return a + b


# Create ToolNode with local functions
tools = ToolNode([weather_tool, calculator])

# Execute a tool
result = await tools.invoke(
    name="weather_tool",
    args={"location": "New York"},
    tool_call_id="call_123",
    config={"user_id": "user1"},
    state=agent_state,
)

Methods:

Name Description
__init__

Initialize ToolNode with functions and optional tool adapters.

all_tools

Get all available tools from all configured providers.

all_tools_sync

Synchronously get all available tools from all configured providers.

get_local_tool

Generate OpenAI-compatible tool definitions for all registered local functions.

invoke

Execute a specific tool by name with the provided arguments.

stream

Execute a tool with streaming support, yielding incremental results.

Source code in pyagenity/graph/tool_node/base.py
class ToolNode(
    SchemaMixin,
    LocalExecMixin,
    MCPMixin,
    ComposioMixin,
    LangChainMixin,
    KwargsResolverMixin,
):
    """A unified registry and executor for callable functions from various tool providers.

    ToolNode serves as the central hub for managing and executing tools from multiple sources:
    - Local Python functions
    - MCP (Model Context Protocol) tools
    - Composio adapter tools
    - LangChain tools

    The class uses a mixin-based architecture to separate concerns and maintain clean
    integration with different tool providers. It provides both synchronous and asynchronous
    execution methods with comprehensive event publishing and error handling.

    Attributes:
        _funcs: Dictionary mapping function names to callable functions.
        _client: Optional MCP client for remote tool execution.
        _composio: Optional Composio adapter for external integrations.
        _langchain: Optional LangChain adapter for LangChain tools.
        mcp_tools: List of available MCP tool names.
        composio_tools: List of available Composio tool names.
        langchain_tools: List of available LangChain tool names.

    Example:
        ```python
        # Define local tools
        def weather_tool(location: str) -> str:
            return f"Weather in {location}: Sunny, 25°C"


        def calculator(a: int, b: int) -> int:
            return a + b


        # Create ToolNode with local functions
        tools = ToolNode([weather_tool, calculator])

        # Execute a tool
        result = await tools.invoke(
            name="weather_tool",
            args={"location": "New York"},
            tool_call_id="call_123",
            config={"user_id": "user1"},
            state=agent_state,
        )
        ```
    """

    def __init__(
        self,
        functions: t.Iterable[t.Callable],
        client: deps.Client | None = None,  # type: ignore
        composio_adapter: ComposioAdapter | None = None,
        langchain_adapter: t.Any | None = None,
    ) -> None:
        """Initialize ToolNode with functions and optional tool adapters.

        Args:
            functions: Iterable of callable functions to register as tools. Each function
                will be registered with its `__name__` as the tool identifier.
            client: Optional MCP (Model Context Protocol) client for remote tool access.
                Requires 'fastmcp' and 'mcp' packages to be installed.
            composio_adapter: Optional Composio adapter for external integrations and
                third-party API access.
            langchain_adapter: Optional LangChain adapter for accessing LangChain tools
                and integrations.

        Raises:
            ImportError: If MCP client is provided but required packages are not installed.
            TypeError: If any item in functions is not callable.

        Note:
            When using MCP client functionality, ensure you have installed the required
            dependencies with: `pip install pyagenity[mcp]`
        """
        logger.info("Initializing ToolNode with %d functions", len(list(functions)))

        if client is not None:
            # Read flags dynamically so tests can patch pyagenity.graph.tool_node.HAS_*
            mod = sys.modules.get("pyagenity.graph.tool_node")
            has_fastmcp = getattr(mod, "HAS_FASTMCP", deps.HAS_FASTMCP) if mod else deps.HAS_FASTMCP
            has_mcp = getattr(mod, "HAS_MCP", deps.HAS_MCP) if mod else deps.HAS_MCP

            if not has_fastmcp or not has_mcp:
                raise ImportError(
                    "MCP client functionality requires 'fastmcp' and 'mcp' packages. "
                    "Install with: pip install pyagenity[mcp]"
                )
            logger.debug("ToolNode initialized with MCP client")

        self._funcs: dict[str, t.Callable] = {}
        self._client: deps.Client | None = client  # type: ignore
        self._composio: ComposioAdapter | None = composio_adapter
        self._langchain: t.Any | None = langchain_adapter

        for fn in functions:
            if not callable(fn):
                raise TypeError("ToolNode only accepts callables")
            self._funcs[fn.__name__] = fn

        self.mcp_tools: list[str] = []
        self.composio_tools: list[str] = []
        self.langchain_tools: list[str] = []

    async def _all_tools_async(self) -> list[dict]:
        tools: list[dict] = self.get_local_tool()
        tools.extend(await self._get_mcp_tool())
        tools.extend(await self._get_composio_tools())
        tools.extend(await self._get_langchain_tools())
        return tools

    async def all_tools(self) -> list[dict]:
        """Get all available tools from all configured providers.

        Retrieves and combines tool definitions from local functions, MCP client,
        Composio adapter, and LangChain adapter. Each tool definition includes
        the function schema with parameters and descriptions.

        Returns:
            List of tool definitions in OpenAI function calling format. Each dict
            contains 'type': 'function' and 'function' with name, description,
            and parameters schema.

        Example:
            ```python
            tools = await tool_node.all_tools()
            # Returns:
            # [
            #   {
            #     "type": "function",
            #     "function": {
            #       "name": "weather_tool",
            #       "description": "Get weather information for a location",
            #       "parameters": {
            #         "type": "object",
            #         "properties": {
            #           "location": {"type": "string"}
            #         },
            #         "required": ["location"]
            #       }
            #     }
            #   }
            # ]
            ```
        """
        return await self._all_tools_async()

    def all_tools_sync(self) -> list[dict]:
        """Synchronously get all available tools from all configured providers.

        This is a synchronous wrapper around the async all_tools() method.
        It uses asyncio.run() to handle async operations from MCP, Composio,
        and LangChain adapters.

        Returns:
            List of tool definitions in OpenAI function calling format.

        Note:
            Prefer using the async `all_tools()` method when possible, especially
            in async contexts, to avoid potential event loop issues.
        """
        tools: list[dict] = self.get_local_tool()
        if self._client:
            result = asyncio.run(self._get_mcp_tool())
            if result:
                tools.extend(result)
        comp = asyncio.run(self._get_composio_tools())
        if comp:
            tools.extend(comp)
        lc = asyncio.run(self._get_langchain_tools())
        if lc:
            tools.extend(lc)
        return tools

    async def invoke(  # noqa: PLR0915
        self,
        name: str,
        args: dict,
        tool_call_id: str,
        config: dict[str, t.Any],
        state: AgentState,
        callback_manager: CallbackManager = Inject[CallbackManager],
    ) -> t.Any:
        """Execute a specific tool by name with the provided arguments.

        This method handles tool execution across all configured providers (local,
        MCP, Composio, LangChain) with comprehensive error handling, event publishing,
        and callback management.

        Args:
            name: The name of the tool to execute.
            args: Dictionary of arguments to pass to the tool function.
            tool_call_id: Unique identifier for this tool execution, used for
                tracking and result correlation.
            config: Configuration dictionary containing execution context and
                user-specific settings.
            state: Current agent state for context-aware tool execution.
            callback_manager: Manager for executing pre/post execution callbacks.
                Injected via dependency injection if not provided.

        Returns:
            Message object containing tool execution results, either successful
            output or error information with appropriate status indicators.

        Raises:
            The method handles all exceptions internally and returns error Messages
            rather than raising exceptions, ensuring robust execution flow.

        Example:
            ```python
            result = await tool_node.invoke(
                name="weather_tool",
                args={"location": "Paris", "units": "metric"},
                tool_call_id="call_abc123",
                config={"user_id": "user1", "session_id": "session1"},
                state=current_agent_state,
            )

            # result is a Message with tool execution results
            print(result.content)  # Tool output or error information
            ```

        Note:
            The method publishes execution events throughout the process for
            monitoring and debugging purposes. Tool execution is routed based
            on tool provider precedence: MCP → Composio → LangChain → Local.
        """
        logger.info("Executing tool '%s' with %d arguments", name, len(args))
        logger.debug("Tool arguments: %s", args)

        event = EventModel.default(
            config,
            data={"args": args, "tool_call_id": tool_call_id, "function_name": name},
            content_type=[ContentType.TOOL_CALL],
            event=Event.TOOL_EXECUTION,
        )
        event.node_name = name
        # Attach structured tool call block
        with contextlib.suppress(Exception):
            event.content_blocks = [ToolCallBlock(id=tool_call_id, name=name, args=args)]
        publish_event(event)

        if name in self.mcp_tools:
            event.metadata["is_mcp"] = True
            publish_event(event)
            res = await self._mcp_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = res.model_dump()
            # Attach tool result block mirroring the tool output
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            return res

        if name in self.composio_tools:
            event.metadata["is_composio"] = True
            publish_event(event)
            res = await self._composio_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = res.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            return res

        if name in self.langchain_tools:
            event.metadata["is_langchain"] = True
            publish_event(event)
            res = await self._langchain_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = res.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            return res

        if name in self._funcs:
            event.metadata["is_mcp"] = False
            publish_event(event)
            res = await self._internal_execute(
                name,
                args,
                tool_call_id,
                config,
                state,
                callback_manager,
            )
            event.data["message"] = res.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            return res

        error_msg = f"Tool '{name}' not found."
        event.data["error"] = error_msg
        event.event_type = EventType.ERROR
        event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
        publish_event(event)
        return Message.tool_message(
            content=[
                ErrorBlock(message=error_msg),
                ToolResultBlock(
                    call_id=tool_call_id,
                    output=error_msg,
                    is_error=True,
                    status="failed",
                ),
            ],
        )

    async def stream(  # noqa: PLR0915
        self,
        name: str,
        args: dict,
        tool_call_id: str,
        config: dict[str, t.Any],
        state: AgentState,
        callback_manager: CallbackManager = Inject[CallbackManager],
    ) -> t.AsyncIterator[Message]:
        """Execute a tool with streaming support, yielding incremental results.

        Similar to invoke() but designed for tools that can provide streaming responses
        or when you want to process results as they become available. Currently,
        most tool providers return complete results, so this method typically yields
        a single Message with the full result.

        Args:
            name: The name of the tool to execute.
            args: Dictionary of arguments to pass to the tool function.
            tool_call_id: Unique identifier for this tool execution.
            config: Configuration dictionary containing execution context.
            state: Current agent state for context-aware tool execution.
            callback_manager: Manager for executing pre/post execution callbacks.

        Yields:
            Message objects containing tool execution results or status updates.
            For most tools, this will yield a single complete result Message.

        Example:
            ```python
            async for message in tool_node.stream(
                name="data_processor",
                args={"dataset": "large_data.csv"},
                tool_call_id="call_stream123",
                config={"user_id": "user1"},
                state=current_state,
            ):
                print(f"Received: {message.content}")
                # Process each streamed result
            ```

        Note:
            The streaming interface is designed for future expansion where tools
            may provide true streaming responses. Currently, it provides a
            consistent async iterator interface over tool results.
        """
        logger.info("Executing tool '%s' with %d arguments", name, len(args))
        logger.debug("Tool arguments: %s", args)
        event = EventModel.default(
            config,
            data={"args": args, "tool_call_id": tool_call_id, "function_name": name},
            content_type=[ContentType.TOOL_CALL],
            event=Event.TOOL_EXECUTION,
        )
        event.node_name = "ToolNode"
        with contextlib.suppress(Exception):
            event.content_blocks = [ToolCallBlock(id=tool_call_id, name=name, args=args)]

        if name in self.mcp_tools:
            event.metadata["function_type"] = "mcp"
            publish_event(event)
            message = await self._mcp_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = message.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            yield message
            return

        if name in self.composio_tools:
            event.metadata["function_type"] = "composio"
            publish_event(event)
            message = await self._composio_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = message.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            yield message
            return

        if name in self.langchain_tools:
            event.metadata["function_type"] = "langchain"
            publish_event(event)
            message = await self._langchain_execute(
                name,
                args,
                tool_call_id,
                config,
                callback_manager,
            )
            event.data["message"] = message.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            yield message
            return

        if name in self._funcs:
            event.metadata["function_type"] = "internal"
            publish_event(event)

            result = await self._internal_execute(
                name,
                args,
                tool_call_id,
                config,
                state,
                callback_manager,
            )
            event.data["message"] = result.model_dump()
            with contextlib.suppress(Exception):
                event.content_blocks = [
                    ToolResultBlock(call_id=tool_call_id, output=result.model_dump())
                ]
            event.event_type = EventType.END
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            yield result
            return

        error_msg = f"Tool '{name}' not found."
        event.data["error"] = error_msg
        event.event_type = EventType.ERROR
        event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
        publish_event(event)

        yield Message.tool_message(
            content=[
                ErrorBlock(message=error_msg),
                ToolResultBlock(
                    call_id=tool_call_id,
                    output=error_msg,
                    is_error=True,
                    status="failed",
                ),
            ],
        )
Attributes
composio_tools instance-attribute
composio_tools = []
langchain_tools instance-attribute
langchain_tools = []
mcp_tools instance-attribute
mcp_tools = []
Functions
__init__
__init__(functions, client=None, composio_adapter=None, langchain_adapter=None)

Initialize ToolNode with functions and optional tool adapters.

Parameters:

Name Type Description Default
functions Iterable[Callable]

Iterable of callable functions to register as tools. Each function will be registered with its __name__ as the tool identifier.

required
client Client | None

Optional MCP (Model Context Protocol) client for remote tool access. Requires 'fastmcp' and 'mcp' packages to be installed.

None
composio_adapter ComposioAdapter | None

Optional Composio adapter for external integrations and third-party API access.

None
langchain_adapter Any | None

Optional LangChain adapter for accessing LangChain tools and integrations.

None

Raises:

Type Description
ImportError

If MCP client is provided but required packages are not installed.

TypeError

If any item in functions is not callable.

Note

When using MCP client functionality, ensure you have installed the required dependencies with: pip install pyagenity[mcp]
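
The snippet below sketches construction with an MCP client alongside local functions. It assumes the optional `fastmcp` package is installed and that its `Client` accepts a server URL; treat the URL and the `Client(...)` call as placeholders for your own MCP setup.

```python
# Hedged sketch: ToolNode backed by both a local function and an MCP server.
from fastmcp import Client  # optional dependency: pip install pyagenity[mcp]


def local_echo(text: str) -> str:
    return text


mcp_client = Client("http://localhost:8000/mcp")  # placeholder server URL
tools = ToolNode([local_echo], client=mcp_client)
```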

Source code in pyagenity/graph/tool_node/base.py
def __init__(
    self,
    functions: t.Iterable[t.Callable],
    client: deps.Client | None = None,  # type: ignore
    composio_adapter: ComposioAdapter | None = None,
    langchain_adapter: t.Any | None = None,
) -> None:
    """Initialize ToolNode with functions and optional tool adapters.

    Args:
        functions: Iterable of callable functions to register as tools. Each function
            will be registered with its `__name__` as the tool identifier.
        client: Optional MCP (Model Context Protocol) client for remote tool access.
            Requires 'fastmcp' and 'mcp' packages to be installed.
        composio_adapter: Optional Composio adapter for external integrations and
            third-party API access.
        langchain_adapter: Optional LangChain adapter for accessing LangChain tools
            and integrations.

    Raises:
        ImportError: If MCP client is provided but required packages are not installed.
        TypeError: If any item in functions is not callable.

    Note:
        When using MCP client functionality, ensure you have installed the required
        dependencies with: `pip install pyagenity[mcp]`
    """
    logger.info("Initializing ToolNode with %d functions", len(list(functions)))

    if client is not None:
        # Read flags dynamically so tests can patch pyagenity.graph.tool_node.HAS_*
        mod = sys.modules.get("pyagenity.graph.tool_node")
        has_fastmcp = getattr(mod, "HAS_FASTMCP", deps.HAS_FASTMCP) if mod else deps.HAS_FASTMCP
        has_mcp = getattr(mod, "HAS_MCP", deps.HAS_MCP) if mod else deps.HAS_MCP

        if not has_fastmcp or not has_mcp:
            raise ImportError(
                "MCP client functionality requires 'fastmcp' and 'mcp' packages. "
                "Install with: pip install pyagenity[mcp]"
            )
        logger.debug("ToolNode initialized with MCP client")

    self._funcs: dict[str, t.Callable] = {}
    self._client: deps.Client | None = client  # type: ignore
    self._composio: ComposioAdapter | None = composio_adapter
    self._langchain: t.Any | None = langchain_adapter

    for fn in functions:
        if not callable(fn):
            raise TypeError("ToolNode only accepts callables")
        self._funcs[fn.__name__] = fn

    self.mcp_tools: list[str] = []
    self.composio_tools: list[str] = []
    self.langchain_tools: list[str] = []
all_tools async
all_tools()

Get all available tools from all configured providers.

Retrieves and combines tool definitions from local functions, MCP client, Composio adapter, and LangChain adapter. Each tool definition includes the function schema with parameters and descriptions.

Returns:

Type Description
list[dict]

List of tool definitions in OpenAI function calling format. Each dict contains 'type': 'function' and 'function' with name, description, and parameters schema.

Example
tools = await tool_node.all_tools()
# Returns:
# [
#   {
#     "type": "function",
#     "function": {
#       "name": "weather_tool",
#       "description": "Get weather information for a location",
#       "parameters": {
#         "type": "object",
#         "properties": {
#           "location": {"type": "string"}
#         },
#         "required": ["location"]
#       }
#     }
#   }
# ]
Source code in pyagenity/graph/tool_node/base.py
async def all_tools(self) -> list[dict]:
    """Get all available tools from all configured providers.

    Retrieves and combines tool definitions from local functions, MCP client,
    Composio adapter, and LangChain adapter. Each tool definition includes
    the function schema with parameters and descriptions.

    Returns:
        List of tool definitions in OpenAI function calling format. Each dict
        contains 'type': 'function' and 'function' with name, description,
        and parameters schema.

    Example:
        ```python
        tools = await tool_node.all_tools()
        # Returns:
        # [
        #   {
        #     "type": "function",
        #     "function": {
        #       "name": "weather_tool",
        #       "description": "Get weather information for a location",
        #       "parameters": {
        #         "type": "object",
        #         "properties": {
        #           "location": {"type": "string"}
        #         },
        #         "required": ["location"]
        #       }
        #     }
        #   }
        # ]
        ```
    """
    return await self._all_tools_async()
all_tools_sync
all_tools_sync()

Synchronously get all available tools from all configured providers.

This is a synchronous wrapper around the async all_tools() method. It uses asyncio.run() to handle async operations from MCP, Composio, and LangChain adapters.

Returns:

Type Description
list[dict]

List of tool definitions in OpenAI function calling format.

Note

Prefer using the async all_tools() method when possible, especially in async contexts, to avoid potential event loop issues.
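
For example, a synchronous setup script can call the sync variant directly, while async application code should await `all_tools()` to avoid nested event loops. This is only a usage sketch; `tools` is assumed to be an already constructed ToolNode.

```python
# Synchronous context (e.g. a setup script): no event loop is running.
tool_defs = tools.all_tools_sync()


# Asynchronous context: prefer the awaitable form.
async def collect_tools() -> list[dict]:
    return await tools.all_tools()
```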

Source code in pyagenity/graph/tool_node/base.py
def all_tools_sync(self) -> list[dict]:
    """Synchronously get all available tools from all configured providers.

    This is a synchronous wrapper around the async all_tools() method.
    It uses asyncio.run() to handle async operations from MCP, Composio,
    and LangChain adapters.

    Returns:
        List of tool definitions in OpenAI function calling format.

    Note:
        Prefer using the async `all_tools()` method when possible, especially
        in async contexts, to avoid potential event loop issues.
    """
    tools: list[dict] = self.get_local_tool()
    if self._client:
        result = asyncio.run(self._get_mcp_tool())
        if result:
            tools.extend(result)
    comp = asyncio.run(self._get_composio_tools())
    if comp:
        tools.extend(comp)
    lc = asyncio.run(self._get_langchain_tools())
    if lc:
        tools.extend(lc)
    return tools
get_local_tool
get_local_tool()

Generate OpenAI-compatible tool definitions for all registered local functions.

Inspects all registered functions in _funcs and automatically generates tool schemas by analyzing function signatures, type annotations, and docstrings. Excludes injectable parameters that are provided by the framework.

Returns:

Type Description
list[dict]

List of tool definitions in OpenAI function calling format. Each definition includes the function name, description (from docstring), and complete parameter schema with types and required fields.

Example

For a function:

def calculate(a: int, b: int, operation: str = "add") -> int:
    '''Perform arithmetic calculation.'''
    return a + b if operation == "add" else a - b

Returns:

[
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Perform arithmetic calculation.",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "integer"},
                    "b": {"type": "integer"},
                    "operation": {"type": "string", "default": "add"},
                },
                "required": ["a", "b"],
            },
        },
    }
]

Note

Parameters listed in INJECTABLE_PARAMS (like 'state', 'config', 'tool_call_id') are automatically excluded from the generated schema as they are provided by the framework during execution.
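
As a concrete illustration of that exclusion (following the documented behavior, with the generated schema abbreviated in the comment):

```python
def lookup(query: str, state, tool_call_id) -> str:
    """Look up a record."""
    return f"{query} (call {tool_call_id})"


tools = ToolNode([lookup])
schema = tools.get_local_tool()
# Only 'query' appears in the generated parameters; 'state' and
# 'tool_call_id' are injected by the framework and omitted from the schema.
```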

Source code in pyagenity/graph/tool_node/schema.py
def get_local_tool(self) -> list[dict]:
    """Generate OpenAI-compatible tool definitions for all registered local functions.

    Inspects all registered functions in _funcs and automatically generates
    tool schemas by analyzing function signatures, type annotations, and docstrings.
    Excludes injectable parameters that are provided by the framework.

    Returns:
        List of tool definitions in OpenAI function calling format. Each
        definition includes the function name, description (from docstring),
        and complete parameter schema with types and required fields.

    Example:
        For a function:
        ```python
        def calculate(a: int, b: int, operation: str = "add") -> int:
            '''Perform arithmetic calculation.'''
            return a + b if operation == "add" else a - b
        ```

        Returns:
        ```python
        [
            {
                "type": "function",
                "function": {
                    "name": "calculate",
                    "description": "Perform arithmetic calculation.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "a": {"type": "integer"},
                            "b": {"type": "integer"},
                            "operation": {"type": "string", "default": "add"},
                        },
                        "required": ["a", "b"],
                    },
                },
            }
        ]
        ```

    Note:
        Parameters listed in INJECTABLE_PARAMS (like 'state', 'config',
        'tool_call_id') are automatically excluded from the generated schema
        as they are provided by the framework during execution.
    """
    tools: list[dict] = []
    for name, fn in self._funcs.items():
        sig = inspect.signature(fn)
        params_schema: dict = {"type": "object", "properties": {}, "required": []}

        for p_name, p in sig.parameters.items():
            if p.kind in (
                inspect.Parameter.VAR_POSITIONAL,
                inspect.Parameter.VAR_KEYWORD,
            ):
                continue

            if p_name in INJECTABLE_PARAMS:
                continue

            annotation = p.annotation if p.annotation is not inspect._empty else str
            prop = SchemaMixin._annotation_to_schema(annotation, p.default)
            params_schema["properties"][p_name] = prop

            if p.default is inspect._empty:
                params_schema["required"].append(p_name)

        if not params_schema["required"]:
            params_schema.pop("required")

        description = inspect.getdoc(fn) or "No description provided."

        # provider = getattr(fn, "_py_tool_provider", None)
        # tags = getattr(fn, "_py_tool_tags", None)
        # capabilities = getattr(fn, "_py_tool_capabilities", None)

        entry = {
            "type": "function",
            "function": {
                "name": name,
                "description": description,
                "parameters": params_schema,
            },
        }
        # meta: dict[str, t.Any] = {}
        # if provider:
        #     meta["provider"] = provider
        # if tags:
        #     meta["tags"] = tags
        # if capabilities:
        #     meta["capabilities"] = capabilities
        # if meta:
        #     entry["x-pyagenity"] = meta

        tools.append(entry)

    return tools
invoke async
invoke(name, args, tool_call_id, config, state, callback_manager=Inject[CallbackManager])

Execute a specific tool by name with the provided arguments.

This method handles tool execution across all configured providers (local, MCP, Composio, LangChain) with comprehensive error handling, event publishing, and callback management.

Parameters:

Name Type Description Default
name str

The name of the tool to execute.

required
args dict

Dictionary of arguments to pass to the tool function.

required
tool_call_id str

Unique identifier for this tool execution, used for tracking and result correlation.

required
config dict[str, Any]

Configuration dictionary containing execution context and user-specific settings.

required
state AgentState

Current agent state for context-aware tool execution.

required
callback_manager CallbackManager

Manager for executing pre/post execution callbacks. Injected via dependency injection if not provided.

Inject[CallbackManager]

Returns:

Type Description
Any

Message object containing tool execution results, either successful output or error information with appropriate status indicators.

Example
result = await tool_node.invoke(
    name="weather_tool",
    args={"location": "Paris", "units": "metric"},
    tool_call_id="call_abc123",
    config={"user_id": "user1", "session_id": "session1"},
    state=current_agent_state,
)

# result is a Message with tool execution results
print(result.content)  # Tool output or error information
Note

The method publishes execution events throughout the process for monitoring and debugging purposes. Tool execution is routed based on tool provider precedence: MCP → Composio → LangChain → Local.
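
If the name does not match any provider, `invoke` does not raise; it returns an error Message, as the source below shows. A short usage sketch:

```python
missing = await tool_node.invoke(
    name="not_registered",
    args={},
    tool_call_id="call_missing",
    config={"user_id": "user1"},
    state=current_agent_state,
)
# missing is a tool Message whose content includes an ErrorBlock and a
# ToolResultBlock marked is_error=True with status="failed".
```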

Source code in pyagenity/graph/tool_node/base.py
async def invoke(  # noqa: PLR0915
    self,
    name: str,
    args: dict,
    tool_call_id: str,
    config: dict[str, t.Any],
    state: AgentState,
    callback_manager: CallbackManager = Inject[CallbackManager],
) -> t.Any:
    """Execute a specific tool by name with the provided arguments.

    This method handles tool execution across all configured providers (local,
    MCP, Composio, LangChain) with comprehensive error handling, event publishing,
    and callback management.

    Args:
        name: The name of the tool to execute.
        args: Dictionary of arguments to pass to the tool function.
        tool_call_id: Unique identifier for this tool execution, used for
            tracking and result correlation.
        config: Configuration dictionary containing execution context and
            user-specific settings.
        state: Current agent state for context-aware tool execution.
        callback_manager: Manager for executing pre/post execution callbacks.
            Injected via dependency injection if not provided.

    Returns:
        Message object containing tool execution results, either successful
        output or error information with appropriate status indicators.

    Raises:
        The method handles all exceptions internally and returns error Messages
        rather than raising exceptions, ensuring robust execution flow.

    Example:
        ```python
        result = await tool_node.invoke(
            name="weather_tool",
            args={"location": "Paris", "units": "metric"},
            tool_call_id="call_abc123",
            config={"user_id": "user1", "session_id": "session1"},
            state=current_agent_state,
        )

        # result is a Message with tool execution results
        print(result.content)  # Tool output or error information
        ```

    Note:
        The method publishes execution events throughout the process for
        monitoring and debugging purposes. Tool execution is routed based
        on tool provider precedence: MCP → Composio → LangChain → Local.
    """
    logger.info("Executing tool '%s' with %d arguments", name, len(args))
    logger.debug("Tool arguments: %s", args)

    event = EventModel.default(
        config,
        data={"args": args, "tool_call_id": tool_call_id, "function_name": name},
        content_type=[ContentType.TOOL_CALL],
        event=Event.TOOL_EXECUTION,
    )
    event.node_name = name
    # Attach structured tool call block
    with contextlib.suppress(Exception):
        event.content_blocks = [ToolCallBlock(id=tool_call_id, name=name, args=args)]
    publish_event(event)

    if name in self.mcp_tools:
        event.metadata["is_mcp"] = True
        publish_event(event)
        res = await self._mcp_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = res.model_dump()
        # Attach tool result block mirroring the tool output
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        return res

    if name in self.composio_tools:
        event.metadata["is_composio"] = True
        publish_event(event)
        res = await self._composio_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = res.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        return res

    if name in self.langchain_tools:
        event.metadata["is_langchain"] = True
        publish_event(event)
        res = await self._langchain_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = res.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        return res

    if name in self._funcs:
        event.metadata["is_mcp"] = False
        publish_event(event)
        res = await self._internal_execute(
            name,
            args,
            tool_call_id,
            config,
            state,
            callback_manager,
        )
        event.data["message"] = res.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=res.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        return res

    error_msg = f"Tool '{name}' not found."
    event.data["error"] = error_msg
    event.event_type = EventType.ERROR
    event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
    publish_event(event)
    return Message.tool_message(
        content=[
            ErrorBlock(message=error_msg),
            ToolResultBlock(
                call_id=tool_call_id,
                output=error_msg,
                is_error=True,
                status="failed",
            ),
        ],
    )
stream async
stream(name, args, tool_call_id, config, state, callback_manager=Inject[CallbackManager])

Execute a tool with streaming support, yielding incremental results.

Similar to invoke() but designed for tools that can provide streaming responses or when you want to process results as they become available. Currently, most tool providers return complete results, so this method typically yields a single Message with the full result.

Parameters:

Name Type Description Default
name str

The name of the tool to execute.

required
args dict

Dictionary of arguments to pass to the tool function.

required
tool_call_id str

Unique identifier for this tool execution.

required
config dict[str, Any]

Configuration dictionary containing execution context.

required
state AgentState

Current agent state for context-aware tool execution.

required
callback_manager CallbackManager

Manager for executing pre/post execution callbacks.

Inject[CallbackManager]

Yields:

Type Description
AsyncIterator[Message]

Message objects containing tool execution results or status updates. For most tools, this will yield a single complete result Message.

Example
async for message in tool_node.stream(
    name="data_processor",
    args={"dataset": "large_data.csv"},
    tool_call_id="call_stream123",
    config={"user_id": "user1"},
    state=current_state,
):
    print(f"Received: {message.content}")
    # Process each streamed result
Note

The streaming interface is designed for future expansion where tools may provide true streaming responses. Currently, it provides a consistent async iterator interface over tool results.

Source code in pyagenity/graph/tool_node/base.py
async def stream(  # noqa: PLR0915
    self,
    name: str,
    args: dict,
    tool_call_id: str,
    config: dict[str, t.Any],
    state: AgentState,
    callback_manager: CallbackManager = Inject[CallbackManager],
) -> t.AsyncIterator[Message]:
    """Execute a tool with streaming support, yielding incremental results.

    Similar to invoke() but designed for tools that can provide streaming responses
    or when you want to process results as they become available. Currently,
    most tool providers return complete results, so this method typically yields
    a single Message with the full result.

    Args:
        name: The name of the tool to execute.
        args: Dictionary of arguments to pass to the tool function.
        tool_call_id: Unique identifier for this tool execution.
        config: Configuration dictionary containing execution context.
        state: Current agent state for context-aware tool execution.
        callback_manager: Manager for executing pre/post execution callbacks.

    Yields:
        Message objects containing tool execution results or status updates.
        For most tools, this will yield a single complete result Message.

    Example:
        ```python
        async for message in tool_node.stream(
            name="data_processor",
            args={"dataset": "large_data.csv"},
            tool_call_id="call_stream123",
            config={"user_id": "user1"},
            state=current_state,
        ):
            print(f"Received: {message.content}")
            # Process each streamed result
        ```

    Note:
        The streaming interface is designed for future expansion where tools
        may provide true streaming responses. Currently, it provides a
        consistent async iterator interface over tool results.
    """
    logger.info("Executing tool '%s' with %d arguments", name, len(args))
    logger.debug("Tool arguments: %s", args)
    event = EventModel.default(
        config,
        data={"args": args, "tool_call_id": tool_call_id, "function_name": name},
        content_type=[ContentType.TOOL_CALL],
        event=Event.TOOL_EXECUTION,
    )
    event.node_name = "ToolNode"
    with contextlib.suppress(Exception):
        event.content_blocks = [ToolCallBlock(id=tool_call_id, name=name, args=args)]

    if name in self.mcp_tools:
        event.metadata["function_type"] = "mcp"
        publish_event(event)
        message = await self._mcp_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = message.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        yield message
        return

    if name in self.composio_tools:
        event.metadata["function_type"] = "composio"
        publish_event(event)
        message = await self._composio_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = message.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        yield message
        return

    if name in self.langchain_tools:
        event.metadata["function_type"] = "langchain"
        publish_event(event)
        message = await self._langchain_execute(
            name,
            args,
            tool_call_id,
            config,
            callback_manager,
        )
        event.data["message"] = message.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=message.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        yield message
        return

    if name in self._funcs:
        event.metadata["function_type"] = "internal"
        publish_event(event)

        result = await self._internal_execute(
            name,
            args,
            tool_call_id,
            config,
            state,
            callback_manager,
        )
        event.data["message"] = result.model_dump()
        with contextlib.suppress(Exception):
            event.content_blocks = [
                ToolResultBlock(call_id=tool_call_id, output=result.model_dump())
            ]
        event.event_type = EventType.END
        event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
        publish_event(event)
        yield result
        return

    error_msg = f"Tool '{name}' not found."
    event.data["error"] = error_msg
    event.event_type = EventType.ERROR
    event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
    publish_event(event)

    yield Message.tool_message(
        content=[
            ErrorBlock(message=error_msg),
            ToolResultBlock(
                call_id=tool_call_id,
                output=error_msg,
                is_error=True,
                status="failed",
            ),
        ],
    )
Functions
Modules
constants

Constants for ToolNode package.

This module defines constants used throughout the ToolNode implementation, particularly parameter names that are automatically injected by the PyAgenity framework during tool execution. These parameters are excluded from tool schema generation since they are provided by the execution context.

The constants are separated into their own module to avoid circular imports and maintain a clean public API.

Parameter names that are automatically injected during tool execution.

These parameters are provided by the PyAgenity framework and should be excluded from tool schema generation. They represent execution context and framework services that are available to tool functions but not provided by the user.

Parameters:

Name Type Description Default
tool_call_id

Unique identifier for the current tool execution.

required
state

Current AgentState instance for context-aware execution.

required
config

Configuration dictionary with execution settings.

required
generated_id

Framework-generated identifier for various purposes.

required
context_manager

BaseContextManager instance for cross-node operations.

required
publisher

BasePublisher instance for event publishing.

required
checkpointer

BaseCheckpointer instance for state persistence.

required
store

BaseStore instance for data storage operations.

required
Note

Tool functions can declare these parameters in their signatures to receive the corresponding services, but they should not be included in the tool schema since they're not user-provided arguments.
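
The schema generator applies this set as a simple membership filter over function signatures. A minimal sketch of that pattern, mirroring `get_local_tool` above and assuming the module path shown in these docs:

```python
import inspect

from pyagenity.graph.tool_node.constants import INJECTABLE_PARAMS


def user_facing_params(fn) -> list[str]:
    """Return only the parameters a caller must supply for a tool function."""
    sig = inspect.signature(fn)
    return [name for name in sig.parameters if name not in INJECTABLE_PARAMS]
```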

Attributes:

Name Type Description
INJECTABLE_PARAMS
Attributes
INJECTABLE_PARAMS module-attribute
INJECTABLE_PARAMS = {'tool_call_id', 'state', 'config', 'generated_id', 'context_manager', 'publisher', 'checkpointer', 'store'}
deps

Dependency flags and optional imports for ToolNode.

This module manages optional third-party dependencies for the ToolNode implementation, providing clean import handling and feature flags. It isolates optional imports to prevent ImportError cascades when optional dependencies are not installed.

The module handles two main optional dependency groups:

1. MCP (Model Context Protocol) support via 'fastmcp' and 'mcp' packages
2. Future extensibility for other optional tool providers

By centralizing optional imports here, other modules can safely import the flags and types without triggering ImportError exceptions, allowing graceful degradation when optional features are not available.

Typical usage
from .deps import HAS_FASTMCP, HAS_MCP, Client

if HAS_FASTMCP and HAS_MCP:
    # Use MCP functionality
    client = Client(...)
else:
    # Graceful fallback or error message
    client = None

HAS_FASTMCP: FastMCP integration support. Boolean flag indicating whether FastMCP is available; True if the 'fastmcp' package is installed and imports successfully.

Client: FastMCP Client class for connecting to MCP servers. None if FastMCP is not available.

CallToolResult: Result type for MCP tool executions. None if FastMCP is not available.

Attributes:

Name Type Description
HAS_FASTMCP
HAS_MCP
Attributes
HAS_FASTMCP module-attribute
HAS_FASTMCP = True
HAS_MCP module-attribute
HAS_MCP = True
__all__ module-attribute
__all__ = ['HAS_FASTMCP', 'HAS_MCP', 'CallToolResult', 'Client', 'ContentBlock', 'Tool']
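
These flags follow the usual optional-import pattern. A minimal sketch of how such flags are typically set; the exact internals of deps.py are not shown here and are assumed:

```python
try:
    from fastmcp import Client  # optional dependency

    HAS_FASTMCP = True
except ImportError:  # graceful degradation when 'fastmcp' is absent
    Client = None  # type: ignore[assignment]
    HAS_FASTMCP = False
```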
executors

Executors for different tool providers and local functions.

Classes:

Name Description
ComposioMixin
KwargsResolverMixin
LangChainMixin
LocalExecMixin
MCPMixin

Attributes:

Name Type Description
logger
Attributes
logger module-attribute
logger = getLogger(__name__)
Classes
ComposioMixin

Attributes:

Name Type Description
composio_tools list[str]
Source code in pyagenity/graph/tool_node/executors.py
class ComposioMixin:
    _composio: ComposioAdapter | None
    composio_tools: list[str]

    async def _get_composio_tools(self) -> list[dict]:
        tools: list[dict] = []
        if not self._composio:
            return tools
        try:
            raw = self._composio.list_raw_tools_for_llm()
            for tdef in raw:
                fn = tdef.get("function", {})
                name = fn.get("name")
                if name:
                    self.composio_tools.append(name)
                tools.append(tdef)
        except Exception as e:  # pragma: no cover - network/optional
            logger.exception("Failed to fetch Composio tools: %s", e)
        return tools

    async def _composio_execute(  # noqa: PLR0915
        self,
        name: str,
        args: dict,
        tool_call_id: str,
        config: dict[str, t.Any],
        callback_mgr: CallbackManager,
    ) -> Message:
        context = CallbackContext(
            invocation_type=InvocationType.TOOL,
            node_name="ToolNode",
            function_name=name,
            metadata={
                "tool_call_id": tool_call_id,
                "args": args,
                "config": config,
                "composio": True,
            },
        )
        meta = {"function_name": name, "function_argument": args, "tool_call_id": tool_call_id}

        event = EventModel.default(
            base_config=config,
            data={
                "tool_call_id": tool_call_id,
                "args": args,
                "function_name": name,
                "is_composio": True,
            },
            content_type=[ContentType.TOOL_CALL],
            event=Event.TOOL_EXECUTION,
        )
        event.event_type = EventType.PROGRESS
        event.node_name = "ToolNode"
        event.sequence_id = 1
        publish_event(event)

        input_data = {**args}

        def safe_serialize(obj: t.Any) -> dict[str, t.Any]:
            try:
                json.dumps(obj)
                return obj if isinstance(obj, dict) else {"content": obj}
            except (TypeError, OverflowError):
                if hasattr(obj, "model_dump"):
                    dumped = obj.model_dump()  # type: ignore
                    if isinstance(dumped, dict) and dumped.get("type") == "resource":
                        resource = dumped.get("resource", {})
                        if isinstance(resource, dict) and "uri" in resource:
                            resource["uri"] = str(resource["uri"])
                            dumped["resource"] = resource
                    return dumped
                return {"content": str(obj), "type": "fallback"}

        try:
            input_data = await callback_mgr.execute_before_invoke(context, input_data)
            event.event_type = EventType.UPDATE
            event.sequence_id = 2
            event.metadata["status"] = "before_invoke_complete Invoke Composio"
            publish_event(event)

            comp_conf = (config.get("composio") if isinstance(config, dict) else None) or {}
            user_id = comp_conf.get("user_id") or config.get("user_id")
            connected_account_id = comp_conf.get("connected_account_id") or config.get(
                "connected_account_id"
            )

            if not self._composio:
                error_result = Message.tool_message(
                    content=[
                        ErrorBlock(message="Composio adapter not configured"),
                        ToolResultBlock(
                            call_id=tool_call_id,
                            output="Composio adapter not configured",
                            status="failed",
                            is_error=True,
                        ),
                    ],
                    meta=meta,
                )
                event.event_type = EventType.ERROR
                event.metadata["error"] = "Composio adapter not configured"
                publish_event(event)
                return error_result

            res = self._composio.execute(
                slug=name,
                arguments=input_data,
                user_id=user_id,
                connected_account_id=connected_account_id,
            )

            successful = bool(res.get("successful"))
            payload = res.get("data")
            error = res.get("error")

            result_blocks = []
            if error and not successful:
                result_blocks.append(
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output={"success": False, "error": error},
                        status="failed",
                        is_error=True,
                    )
                )
                result_blocks.append(ErrorBlock(message=error))
            else:
                if isinstance(payload, list):
                    output = [safe_serialize(item) for item in payload]
                else:
                    output = [safe_serialize(payload)]
                result_blocks.append(
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output=output,
                        status="completed" if successful else "failed",
                        is_error=not successful,
                    )
                )

            result = Message.tool_message(
                content=result_blocks,
                meta=meta,
            )

            res_msg = await callback_mgr.execute_after_invoke(context, input_data, result)
            event.event_type = EventType.END
            event.data["message"] = result.model_dump()
            event.metadata["status"] = "Composio tool execution complete"
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            return res_msg

        except Exception as e:  # pragma: no cover - error path
            recovery_result = await callback_mgr.execute_on_error(context, input_data, e)
            if isinstance(recovery_result, Message):
                event.event_type = EventType.END
                event.data["message"] = recovery_result.model_dump()
                event.metadata["status"] = "Composio tool execution complete, with recovery"
                event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
                publish_event(event)
                return recovery_result

            event.event_type = EventType.END
            event.data["error"] = str(e)
            event.metadata["status"] = "Composio tool execution complete, with error"
            event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
            publish_event(event)
            return Message.tool_message(
                content=[
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output=f"Composio execution error: {e}",
                        status="failed",
                        is_error=True,
                    ),
                    ErrorBlock(message=f"Composio execution error: {e}"),
                ],
                meta=meta,
            )
Attributes
composio_tools instance-attribute
composio_tools
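
Based on the config lookups in `_composio_execute` above, Composio credentials may be supplied either under a nested `composio` key or as top-level keys of the invocation config. A hypothetical config (all values illustrative):

```python
config = {
    "composio": {
        "user_id": "user-123",               # read first from config["composio"]
        "connected_account_id": "acct-456",
    },
    # Falls back to config["user_id"] / config["connected_account_id"]
    # when the nested "composio" mapping is absent.
}
```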
KwargsResolverMixin
Source code in pyagenity/graph/tool_node/executors.py
class KwargsResolverMixin:
    def _should_skip_parameter(self, param: inspect.Parameter) -> bool:
        return param.kind in (
            inspect.Parameter.VAR_POSITIONAL,
            inspect.Parameter.VAR_KEYWORD,
        )

    def _handle_injectable_parameter(
        self,
        p_name: str,
        param: inspect.Parameter,
        injectable_params: dict,
        dependency_container,
    ) -> t.Any | None:
        if p_name in injectable_params:
            injectable_value = injectable_params[p_name]
            if injectable_value is not None:
                return injectable_value

        if dependency_container and dependency_container.has(p_name):
            return dependency_container.get(p_name)

        if param.default is inspect._empty:
            raise TypeError(f"Required injectable parameter '{p_name}' not found")

        return None

    def _get_parameter_value(
        self,
        p_name: str,
        param: inspect.Parameter,
        args: dict,
        injectable_params: dict,
        dependency_container,
    ) -> t.Any | None:
        if p_name in injectable_params:
            return self._handle_injectable_parameter(
                p_name, param, injectable_params, dependency_container
            )

        value_sources = [
            lambda: args.get(p_name),
            lambda: (
                dependency_container.get(p_name)
                if dependency_container and dependency_container.has(p_name)
                else None
            ),
        ]

        for source in value_sources:
            value = source()
            if value is not None:
                return value

        if param.default is not inspect._empty:
            return None

        raise TypeError(f"Missing required parameter '{p_name}' for function")

    def _prepare_kwargs(
        self,
        sig: inspect.Signature,
        args: dict,
        injectable_params: dict,
        dependency_container,
    ) -> dict:
        kwargs: dict = {}
        for p_name, p in sig.parameters.items():
            if self._should_skip_parameter(p):
                continue
            value = self._get_parameter_value(
                p_name, p, args, injectable_params, dependency_container
            )
            if value is not None:
                kwargs[p_name] = value
        return kwargs
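
To illustrate the resolution order implemented by `_prepare_kwargs` (injectable parameters first, then caller-supplied args, then the dependency container, then declared defaults), here is a hedged sketch; `SimpleContainer` is a hypothetical stand-in for the container interface, and in normal use ToolNode drives this internally rather than calling the private method directly:

```python
import inspect

# Assumed import path, matching the source file shown above.
from pyagenity.graph.tool_node.executors import KwargsResolverMixin


class SimpleContainer:
    """Hypothetical stand-in exposing the has()/get() lookup used by the mixin."""

    def __init__(self, values: dict):
        self._values = values

    def has(self, name: str) -> bool:
        return name in self._values

    def get(self, name: str):
        return self._values[name]


def my_tool(query: str, limit: int = 5, store=None) -> str:
    return f"{query} (limit={limit}, store={store})"


resolver = KwargsResolverMixin()
kwargs = resolver._prepare_kwargs(
    sig=inspect.signature(my_tool),
    args={"query": "laptops"},          # caller-provided arguments
    injectable_params={"store": None},  # framework slot, unresolved here
    dependency_container=SimpleContainer({"store": "vector-store"}),
)
# kwargs == {"query": "laptops", "store": "vector-store"};
# 'limit' is omitted, so the eventual call falls back to its declared default.
```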
LangChainMixin

Attributes:

Name Type Description
langchain_tools list[str]
Source code in pyagenity/graph/tool_node/executors.py
class LangChainMixin:
    _langchain: t.Any | None
    langchain_tools: list[str]

    async def _get_langchain_tools(self) -> list[dict]:
        tools: list[dict] = []
        if not self._langchain:
            return tools
        try:
            raw = self._langchain.list_tools_for_llm()
            for tdef in raw:
                fn = tdef.get("function", {})
                name = fn.get("name")
                if name:
                    self.langchain_tools.append(name)
                tools.append(tdef)
        except Exception as e:  # pragma: no cover - optional
            logger.warning("Failed to fetch LangChain tools: %s", e)
        return tools

    async def _langchain_execute(  # noqa: PLR0915
        self,
        name: str,
        args: dict,
        tool_call_id: str,
        config: dict[str, t.Any],
        callback_mgr: CallbackManager,
    ) -> Message:
        context = CallbackContext(
            invocation_type=InvocationType.TOOL,
            node_name="ToolNode",
            function_name=name,
            metadata={
                "tool_call_id": tool_call_id,
                "args": args,
                "config": config,
                "langchain": True,
            },
        )
        meta = {"function_name": name, "function_argument": args, "tool_call_id": tool_call_id}

        event = EventModel.default(
            base_config=config,
            data={
                "tool_call_id": tool_call_id,
                "args": args,
                "function_name": name,
                "is_langchain": True,
            },
            content_type=[ContentType.TOOL_CALL],
            event=Event.TOOL_EXECUTION,
        )
        event.event_type = EventType.PROGRESS
        event.node_name = "ToolNode"
        event.sequence_id = 1
        publish_event(event)

        input_data = {**args}

        def safe_serialize(obj: t.Any) -> dict[str, t.Any]:
            try:
                json.dumps(obj)
                return obj if isinstance(obj, dict) else {"content": obj}
            except (TypeError, OverflowError):
                if hasattr(obj, "model_dump"):
                    dumped = obj.model_dump()  # type: ignore
                    if isinstance(dumped, dict) and dumped.get("type") == "resource":
                        resource = dumped.get("resource", {})
                        if isinstance(resource, dict) and "uri" in resource:
                            resource["uri"] = str(resource["uri"])
                            dumped["resource"] = resource
                    return dumped
                return {"content": str(obj), "type": "fallback"}

        try:
            input_data = await callback_mgr.execute_before_invoke(context, input_data)
            event.event_type = EventType.UPDATE
            event.sequence_id = 2
            event.metadata["status"] = "before_invoke_complete Invoke LangChain"
            publish_event(event)

            if not self._langchain:
                error_result = Message.tool_message(
                    content=[
                        ErrorBlock(message="LangChain adapter not configured"),
                        ToolResultBlock(
                            call_id=tool_call_id,
                            output="LangChain adapter not configured",
                            status="failed",
                            is_error=True,
                        ),
                    ],
                    meta=meta,
                )
                event.event_type = EventType.ERROR
                event.metadata["error"] = "LangChain adapter not configured"
                publish_event(event)
                return error_result

            res = self._langchain.execute(name=name, arguments=input_data)
            successful = bool(res.get("successful"))
            payload = res.get("data")
            error = res.get("error")

            result_blocks = []
            if error and not successful:
                result_blocks.append(
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output={"success": False, "error": error},
                        status="failed",
                        is_error=True,
                    )
                )
                result_blocks.append(ErrorBlock(message=error))
            else:
                if isinstance(payload, list):
                    output = [safe_serialize(item) for item in payload]
                else:
                    output = [safe_serialize(payload)]
                result_blocks.append(
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output=output,
                        status="completed" if successful else "failed",
                        is_error=not successful,
                    )
                )

            result = Message.tool_message(
                content=result_blocks,
                meta=meta,
            )

            res_msg = await callback_mgr.execute_after_invoke(context, input_data, result)
            event.event_type = EventType.END
            event.data["message"] = result.model_dump()
            event.metadata["status"] = "LangChain tool execution complete"
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)
            return res_msg

        except Exception as e:  # pragma: no cover - error path
            recovery_result = await callback_mgr.execute_on_error(context, input_data, e)
            if isinstance(recovery_result, Message):
                event.event_type = EventType.END
                event.data["message"] = recovery_result.model_dump()
                event.metadata["status"] = "LangChain tool execution complete, with recovery"
                event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
                publish_event(event)
                return recovery_result

            event.event_type = EventType.END
            event.data["error"] = str(e)
            event.metadata["status"] = "LangChain tool execution complete, with error"
            event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
            publish_event(event)
            return Message.tool_message(
                content=[
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output=f"LangChain execution error: {e}",
                        status="failed",
                        is_error=True,
                    ),
                    ErrorBlock(message=f"LangChain execution error: {e}"),
                ],
                meta=meta,
            )
Attributes
langchain_tools instance-attribute
langchain_tools
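
Since `_langchain_execute` reads the adapter response via `res.get("successful")`, `res.get("data")`, and `res.get("error")`, the adapter's `execute(name=..., arguments=...)` call is expected to return a mapping shaped roughly like this (values illustrative):

```python
# Successful call
{"successful": True, "data": {"temperature_c": 21}, "error": None}

# Failed call
{"successful": False, "data": None, "error": "tool 'weather' not found"}
```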
LocalExecMixin
Source code in pyagenity/graph/tool_node/executors.py
class LocalExecMixin:
    _funcs: dict[str, t.Callable]

    def _prepare_input_data_tool(
        self,
        fn: t.Callable,
        name: str,
        args: dict,
        default_data: dict,
    ) -> dict:
        sig = inspect.signature(fn)
        input_data = {}
        for param_name, param in sig.parameters.items():
            if param.kind in (
                inspect.Parameter.VAR_POSITIONAL,
                inspect.Parameter.VAR_KEYWORD,
            ):
                continue

            if param_name in ["state", "config", "tool_call_id"]:
                input_data[param_name] = default_data[param_name]
                continue

            if param_name in INJECTABLE_PARAMS:
                continue

            if (
                hasattr(param, "default")
                and param.default is not inspect._empty
                and hasattr(param.default, "__class__")
            ):
                try:
                    if "Inject" in str(type(param.default)):
                        logger.debug(
                            "Skipping injectable parameter '%s' with Inject syntax",
                            param_name,
                        )
                        continue
                except Exception as exc:  # pragma: no cover - defensive
                    logger.exception("Inject detection failed for '%s': %s", param_name, exc)

            if param_name in args:
                input_data[param_name] = args[param_name]
            elif param.default is inspect.Parameter.empty:
                raise TypeError(f"Missing required parameter '{param_name}' for function '{name}'")

        return input_data

    async def _internal_execute(  # noqa: PLR0915
        self,
        name: str,
        args: dict,
        tool_call_id: str,
        config: dict[str, t.Any],
        state: AgentState,
        callback_mgr: CallbackManager,
    ) -> Message:
        context = CallbackContext(
            invocation_type=InvocationType.TOOL,
            node_name="ToolNode",
            function_name=name,
            metadata={"tool_call_id": tool_call_id, "args": args, "config": config},
        )

        fn = self._funcs[name]
        input_data = self._prepare_input_data_tool(
            fn,
            name,
            args,
            {
                "tool_call_id": tool_call_id,
                "state": state,
                "config": config,
            },
        )

        meta = {
            "function_name": name,
            "function_argument": args,
            "tool_call_id": tool_call_id,
        }

        event = EventModel.default(
            base_config=config,
            data={
                "tool_call_id": tool_call_id,
                "args": args,
                "function_name": name,
                "is_mcp": False,
            },
            content_type=[ContentType.TOOL_CALL],
            event=Event.TOOL_EXECUTION,
        )
        event.event_type = EventType.PROGRESS
        event.node_name = "ToolNode"
        event.sequence_id = 1
        publish_event(event)

        def safe_serialize(obj: t.Any) -> dict[str, t.Any]:
            try:
                json.dumps(obj)
                return obj if isinstance(obj, dict) else {"content": obj}
            except (TypeError, OverflowError):
                if hasattr(obj, "model_dump"):
                    dumped = obj.model_dump()  # type: ignore
                    if isinstance(dumped, dict) and dumped.get("type") == "resource":
                        resource = dumped.get("resource", {})
                        if isinstance(resource, dict) and "uri" in resource:
                            resource["uri"] = str(resource["uri"])
                            dumped["resource"] = resource
                    return dumped
                return {"content": str(obj), "type": "fallback"}

        try:
            input_data = await callback_mgr.execute_before_invoke(context, input_data)

            event.event_type = EventType.UPDATE
            event.sequence_id = 2
            event.metadata["status"] = "before_invoke_complete Invoke internal"
            publish_event(event)

            result = await call_sync_or_async(fn, **input_data)

            result = await callback_mgr.execute_after_invoke(
                context,
                input_data,
                result,
            )

            if isinstance(result, Message):
                meta_data = result.metadata or {}
                meta.update(meta_data)
                result.metadata = meta

                event.event_type = EventType.END
                event.data["message"] = result.model_dump()
                event.metadata["status"] = "Internal tool execution complete"
                event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
                publish_event(event)
                return result

            result_blocks = []
            if isinstance(result, str):
                result_blocks.append(
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output=result,
                        status="completed",
                        is_error=False,
                    )
                )
            elif isinstance(result, dict):
                result_blocks.append(
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output=[safe_serialize(result)],
                        status="completed",
                        is_error=False,
                    )
                )
            elif hasattr(result, "model_dump"):
                result_blocks.append(
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output=[safe_serialize(result.model_dump())],
                        status="completed",
                        is_error=False,
                    )
                )
            elif hasattr(result, "__dict__"):
                result_blocks.append(
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output=[safe_serialize(result.__dict__)],
                        status="completed",
                        is_error=False,
                    )
                )
            elif isinstance(result, list):
                output = [safe_serialize(item) for item in result]
                result_blocks.append(
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output=output,
                        status="completed",
                        is_error=False,
                    )
                )
            else:
                result_blocks.append(
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output=str(result),
                        status="completed",
                        is_error=False,
                    )
                )

            msg = Message.tool_message(
                content=result_blocks,
                meta=meta,
            )

            event.event_type = EventType.END
            event.data["message"] = msg.model_dump()
            event.metadata["status"] = "Internal tool execution complete"
            event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
            publish_event(event)

            return msg

        except Exception as e:  # pragma: no cover - error path
            recovery_result = await callback_mgr.execute_on_error(context, input_data, e)

            if isinstance(recovery_result, Message):
                event.event_type = EventType.END
                event.data["message"] = recovery_result.model_dump()
                event.metadata["status"] = "Internal tool execution complete, with recovery"
                event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
                publish_event(event)
                return recovery_result

            event.event_type = EventType.END
            event.data["error"] = str(e)
            event.metadata["status"] = "Internal tool execution complete, with error"
            event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
            publish_event(event)

            return Message.tool_message(
                content=[
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output=f"Internal execution error: {e}",
                        status="failed",
                        is_error=True,
                    ),
                    ErrorBlock(message=f"Internal execution error: {e}"),
                ],
                meta=meta,
            )
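
The branching in `_internal_execute` above means a local tool may return a plain string, a dict, a list, an object exposing `model_dump()`, or a full `Message`; anything else is stringified. A hedged sketch of two equivalent local tools (names and bodies illustrative):

```python
def summarize(text: str) -> str:
    # A plain string becomes a single completed ToolResultBlock.
    return f"Summary: {text[:40]}..."


def summarize_structured(text: str) -> dict:
    # A dict is serialized (via safe_serialize) into the block's output list.
    return {"summary": text[:40], "truncated": len(text) > 40}
```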
MCPMixin

Attributes:

Name Type Description
mcp_tools list[str]
Source code in pyagenity/graph/tool_node/executors.py
class MCPMixin:
    _client: t.Any | None
    # The concrete ToolNode defines this
    mcp_tools: list[str]  # type: ignore[assignment]

    def _serialize_result(
        self,
        tool_call_id: str,
        res: t.Any,
    ) -> list[ContentBlock]:
        def safe_serialize(obj: t.Any) -> dict[str, t.Any]:
            try:
                json.dumps(obj)
                return obj if isinstance(obj, dict) else {"content": obj}
            except (TypeError, OverflowError):
                if hasattr(obj, "model_dump"):
                    dumped = obj.model_dump()  # type: ignore
                    if isinstance(dumped, dict) and dumped.get("type") == "resource":
                        resource = dumped.get("resource", {})
                        if isinstance(resource, dict) and "uri" in resource:
                            resource["uri"] = str(resource["uri"])
                            dumped["resource"] = resource
                    return dumped
                return {"content": str(obj), "type": "fallback"}

        for source in [
            getattr(res, "content", None),
            getattr(res, "structured_content", None),
            getattr(res, "data", None),
        ]:
            if source is None:
                continue
            try:
                if isinstance(source, list):
                    result = [safe_serialize(item) for item in source]
                else:
                    result = [safe_serialize(source)]

                return [
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output=result,
                        is_error=False,
                        status="completed",
                    )
                ]
            except Exception as e:  # pragma: no cover - defensive
                logger.exception("Serialization failure: %s", e)
                continue

        return [
            ToolResultBlock(
                call_id=tool_call_id,
                output=[
                    {
                        "content": str(res),
                        "type": "fallback",
                    }
                ],
                is_error=False,
                status="completed",
            )
        ]

    async def _get_mcp_tool(self) -> list[dict]:
        tools: list[dict] = []
        if self._client:
            async with self._client:
                res = await self._client.ping()
                if not res:
                    return tools
                mcp_tools: list[t.Any] = await self._client.list_tools()
                for i in mcp_tools:
                    # attribute provided by concrete ToolNode
                    self.mcp_tools.append(i.name)  # type: ignore[attr-defined]
                    tools.append(
                        {
                            "type": "function",
                            "function": {
                                "name": i.name,
                                "description": i.description,
                                "parameters": i.inputSchema,
                            },
                        }
                    )
        return tools

    async def _mcp_execute(
        self,
        name: str,
        args: dict,
        tool_call_id: str,
        config: dict[str, t.Any],
        callback_mgr: CallbackManager,
    ) -> Message:
        context = CallbackContext(
            invocation_type=InvocationType.MCP,
            node_name="ToolNode",
            function_name=name,
            metadata={
                "tool_call_id": tool_call_id,
                "args": args,
                "config": config,
                "mcp_client": bool(self._client),
            },
        )

        meta = {
            "function_name": name,
            "function_argument": args,
            "tool_call_id": tool_call_id,
        }

        event = EventModel.default(
            base_config=config,
            data={
                "tool_call_id": tool_call_id,
                "args": args,
                "function_name": name,
                "is_mcp": True,
            },
            content_type=[ContentType.TOOL_CALL],
            event=Event.TOOL_EXECUTION,
        )
        event.event_type = EventType.PROGRESS
        event.node_name = "ToolNode"
        event.sequence_id = 1
        publish_event(event)

        input_data = {**args}

        try:
            input_data = await callback_mgr.execute_before_invoke(context, input_data)
            event.event_type = EventType.UPDATE
            event.sequence_id = 2
            event.metadata["status"] = "before_invoke_complete Invoke MCP"
            publish_event(event)

            if not self._client:
                error_result = Message.tool_message(
                    content=[
                        ErrorBlock(
                            message="No MCP client configured",
                        ),
                        ToolResultBlock(
                            call_id=tool_call_id,
                            output="No MCP client configured",
                            is_error=True,
                            status="failed",
                        ),
                    ],
                    meta=meta,
                )
                res = await callback_mgr.execute_after_invoke(context, input_data, error_result)
                event.event_type = EventType.ERROR
                event.metadata["error"] = "No MCP client configured"
                publish_event(event)
                return res

            async with self._client:
                if not await self._client.ping():
                    error_result = Message.tool_message(
                        content=[
                            ErrorBlock(message="MCP Server not available. Ping failed."),
                            ToolResultBlock(
                                call_id=tool_call_id,
                                output="MCP Server not available. Ping failed.",
                                is_error=True,
                                status="failed",
                            ),
                        ],
                        meta=meta,
                    )
                    event.event_type = EventType.ERROR
                    event.metadata["error"] = "MCP server not available, ping failed"
                    publish_event(event)
                    return await callback_mgr.execute_after_invoke(
                        context, input_data, error_result
                    )

                res: t.Any = await self._client.call_tool(name, input_data)

                final_res = self._serialize_result(tool_call_id, res)

                result = Message.tool_message(
                    content=final_res,
                    meta=meta,
                )

                res = await callback_mgr.execute_after_invoke(context, input_data, result)
                event.event_type = EventType.END
                event.data["message"] = result.model_dump()
                event.metadata["status"] = "MCP tool execution complete"
                event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
                publish_event(event)
                return res

        except Exception as e:  # pragma: no cover - error path
            recovery_result = await callback_mgr.execute_on_error(context, input_data, e)

            if isinstance(recovery_result, Message):
                event.event_type = EventType.END
                event.data["message"] = recovery_result.model_dump()
                event.metadata["status"] = "MCP tool execution complete, with recovery"
                event.content_type = [ContentType.TOOL_RESULT, ContentType.MESSAGE]
                publish_event(event)
                return recovery_result

            event.event_type = EventType.END
            event.data["error"] = str(e)
            event.metadata["status"] = "MCP tool execution complete, with recovery"
            event.content_type = [ContentType.TOOL_RESULT, ContentType.ERROR]
            publish_event(event)

            return Message.tool_message(
                content=[
                    ToolResultBlock(
                        call_id=tool_call_id,
                        output=f"MCP execution error: {e}",
                        is_error=True,
                        status="failed",
                    ),
                    ErrorBlock(message=f"MCP execution error: {e}"),
                ],
                meta=meta,
            )
Attributes
mcp_tools instance-attribute
mcp_tools
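
For reference, `_get_mcp_tool` above converts each MCP tool into the same OpenAI-style function definition used for local tools, e.g. (field values illustrative):

```python
{
    "type": "function",
    "function": {
        "name": "read_file",                      # i.name
        "description": "Read a file from disk.",  # i.description
        "parameters": {                           # i.inputSchema (JSON Schema)
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}
```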
Functions
schema

Schema utilities and local tool description building for ToolNode.

This module provides the SchemaMixin class which handles automatic schema generation for local Python functions, converting their type annotations and signatures into OpenAI-compatible function schemas. It supports various Python types including primitives, Optional types, List types, and Literal enums.

The schema generation process inspects function signatures and converts them to JSON Schema format suitable for use with language models and function calling APIs.
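
A hedged sketch of those conversions, based on the annotation handlers documented below (the function itself is illustrative):

```python
import typing as t


def search(
    query: str,                              # {"type": "string"}; required (no default)
    limit: int = 10,                         # {"type": "integer", "default": 10}
    tags: list[str] | None = None,           # Optional list -> {"type": "array", "items": {"type": "string"}}
    sort: t.Literal["asc", "desc"] = "asc",  # {"type": "string", "enum": ["asc", "desc"], "default": "asc"}
) -> str:
    """Search the catalog."""
    return f"{query} ({limit}, {tags}, {sort})"
```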

Classes:

Name Description
SchemaMixin

Mixin providing schema generation and local tool description building.

Attributes
Classes
SchemaMixin

Mixin providing schema generation and local tool description building.

This mixin provides functionality to automatically generate JSON Schema definitions from Python function signatures. It handles type annotation conversion, parameter analysis, and OpenAI-compatible function schema generation for local tools.

The mixin is designed to be used with ToolNode to automatically generate tool schemas without requiring manual schema definition for Python functions.

Attributes:

Name Type Description
_funcs dict[str, Callable]

Dictionary mapping function names to callable functions. This attribute is expected to be provided by the mixing class.

Methods:

Name Description
get_local_tool

Generate OpenAI-compatible tool definitions for all registered local functions.

Source code in pyagenity/graph/tool_node/schema.py
class SchemaMixin:
    """Mixin providing schema generation and local tool description building.

    This mixin provides functionality to automatically generate JSON Schema definitions
    from Python function signatures. It handles type annotation conversion, parameter
    analysis, and OpenAI-compatible function schema generation for local tools.

    The mixin is designed to be used with ToolNode to automatically generate tool
    schemas without requiring manual schema definition for Python functions.

    Attributes:
        _funcs: Dictionary mapping function names to callable functions. This
            attribute is expected to be provided by the mixing class.
    """

    _funcs: dict[str, t.Callable]

    @staticmethod
    def _handle_optional_annotation(annotation: t.Any, default: t.Any) -> dict | None:
        """Handle Optional type annotations and convert them to appropriate schemas.

        Processes Optional[T] type annotations (Union[T, None]) and generates
        schema for the non-None type. This method handles the common pattern
        of optional parameters in function signatures.

        Args:
            annotation: The type annotation to process, potentially an Optional type.
            default: The default value for the parameter, used for schema generation.

        Returns:
            Dictionary containing the JSON schema for the non-None type if the
            annotation is Optional, None otherwise.

        Example:
            Optional[str] -> {"type": "string"}
            Optional[int] -> {"type": "integer"}
        """
        args = getattr(annotation, "__args__", None)
        if args and any(a is type(None) for a in args):
            non_none = [a for a in args if a is not type(None)]
            if non_none:
                return SchemaMixin._annotation_to_schema(non_none[0], default)
        return None

    @staticmethod
    def _handle_complex_annotation(annotation: t.Any) -> dict:
        """Handle complex type annotations like List, Literal, and generic types.

        Processes generic type annotations that aren't simple primitive types,
        including container types like List and special types like Literal enums.
        Falls back to string type for unrecognized complex types.

        Args:
            annotation: The complex type annotation to process (e.g., List[str],
                Literal["a", "b", "c"]).

        Returns:
            Dictionary containing the appropriate JSON schema for the complex type.
            For List types, returns array schema with item type.
            For Literal types, returns enum schema with allowed values.
            For unknown types, returns string type as fallback.

        Example:
            List[str] -> {"type": "array", "items": {"type": "string"}}
            Literal["red", "green"] -> {"type": "string", "enum": ["red", "green"]}
        """
        origin = getattr(annotation, "__origin__", None)
        if origin is list:
            item_type = getattr(annotation, "__args__", (str,))[0]
            item_schema = SchemaMixin._annotation_to_schema(item_type, None)
            return {"type": "array", "items": item_schema}

        Literal = getattr(t, "Literal", None)
        if Literal is not None and origin is Literal:
            literals = list(getattr(annotation, "__args__", ()))
            if all(isinstance(literal, str) for literal in literals):
                return {"type": "string", "enum": literals}
            return {"enum": literals}

        return {"type": "string"}

    @staticmethod
    def _annotation_to_schema(annotation: t.Any, default: t.Any) -> dict:
        """Convert a Python type annotation to JSON Schema format.

        Main entry point for type annotation conversion. Handles both simple
        and complex types by delegating to appropriate helper methods.
        Includes default value handling when present.

        Args:
            annotation: The Python type annotation to convert (e.g., str, int,
                Optional[str], List[int]).
            default: The default value for the parameter, included in schema
                if not inspect._empty.

        Returns:
            Dictionary containing the JSON schema representation of the type
            annotation, including default values where applicable.

        Example:
            str -> {"type": "string"}
            int -> {"type": "integer"}
            str with default "hello" -> {"type": "string", "default": "hello"}
        """
        schema = SchemaMixin._handle_optional_annotation(annotation, default)
        if schema:
            return schema

        primitive_mappings = {
            str: {"type": "string"},
            int: {"type": "integer"},
            float: {"type": "number"},
            bool: {"type": "boolean"},
        }

        if annotation in primitive_mappings:
            schema = primitive_mappings[annotation]
        else:
            schema = SchemaMixin._handle_complex_annotation(annotation)

        if default is not inspect._empty:
            schema["default"] = default

        return schema

    def get_local_tool(self) -> list[dict]:
        """Generate OpenAI-compatible tool definitions for all registered local functions.

        Inspects all registered functions in _funcs and automatically generates
        tool schemas by analyzing function signatures, type annotations, and docstrings.
        Excludes injectable parameters that are provided by the framework.

        Returns:
            List of tool definitions in OpenAI function calling format. Each
            definition includes the function name, description (from docstring),
            and complete parameter schema with types and required fields.

        Example:
            For a function:
            ```python
            def calculate(a: int, b: int, operation: str = "add") -> int:
                '''Perform arithmetic calculation.'''
                return a + b if operation == "add" else a - b
            ```

            Returns:
            ```python
            [
                {
                    "type": "function",
                    "function": {
                        "name": "calculate",
                        "description": "Perform arithmetic calculation.",
                        "parameters": {
                            "type": "object",
                            "properties": {
                                "a": {"type": "integer"},
                                "b": {"type": "integer"},
                                "operation": {"type": "string", "default": "add"},
                            },
                            "required": ["a", "b"],
                        },
                    },
                }
            ]
            ```

        Note:
            Parameters listed in INJECTABLE_PARAMS (like 'state', 'config',
            'tool_call_id') are automatically excluded from the generated schema
            as they are provided by the framework during execution.
        """
        tools: list[dict] = []
        for name, fn in self._funcs.items():
            sig = inspect.signature(fn)
            params_schema: dict = {"type": "object", "properties": {}, "required": []}

            for p_name, p in sig.parameters.items():
                if p.kind in (
                    inspect.Parameter.VAR_POSITIONAL,
                    inspect.Parameter.VAR_KEYWORD,
                ):
                    continue

                if p_name in INJECTABLE_PARAMS:
                    continue

                annotation = p.annotation if p.annotation is not inspect._empty else str
                prop = SchemaMixin._annotation_to_schema(annotation, p.default)
                params_schema["properties"][p_name] = prop

                if p.default is inspect._empty:
                    params_schema["required"].append(p_name)

            if not params_schema["required"]:
                params_schema.pop("required")

            description = inspect.getdoc(fn) or "No description provided."

            # provider = getattr(fn, "_py_tool_provider", None)
            # tags = getattr(fn, "_py_tool_tags", None)
            # capabilities = getattr(fn, "_py_tool_capabilities", None)

            entry = {
                "type": "function",
                "function": {
                    "name": name,
                    "description": description,
                    "parameters": params_schema,
                },
            }
            # meta: dict[str, t.Any] = {}
            # if provider:
            #     meta["provider"] = provider
            # if tags:
            #     meta["tags"] = tags
            # if capabilities:
            #     meta["capabilities"] = capabilities
            # if meta:
            #     entry["x-pyagenity"] = meta

            tools.append(entry)

        return tools
Functions
get_local_tool
get_local_tool()

Generate OpenAI-compatible tool definitions for all registered local functions.

Inspects all registered functions in _funcs and automatically generates tool schemas by analyzing function signatures, type annotations, and docstrings. Excludes injectable parameters that are provided by the framework.

Returns:

Type Description
list[dict]

List of tool definitions in OpenAI function calling format. Each definition includes the function name, description (from docstring), and complete parameter schema with types and required fields.

Example

For a function:

def calculate(a: int, b: int, operation: str = "add") -> int:
    '''Perform arithmetic calculation.'''
    return a + b if operation == "add" else a - b

Returns:

[
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Perform arithmetic calculation.",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "integer"},
                    "b": {"type": "integer"},
                    "operation": {"type": "string", "default": "add"},
                },
                "required": ["a", "b"],
            },
        },
    }
]

Note

Parameters listed in INJECTABLE_PARAMS (like 'state', 'config', 'tool_call_id') are automatically excluded from the generated schema as they are provided by the framework during execution.

Source code in pyagenity/graph/tool_node/schema.py
def get_local_tool(self) -> list[dict]:
    """Generate OpenAI-compatible tool definitions for all registered local functions.

    Inspects all registered functions in _funcs and automatically generates
    tool schemas by analyzing function signatures, type annotations, and docstrings.
    Excludes injectable parameters that are provided by the framework.

    Returns:
        List of tool definitions in OpenAI function calling format. Each
        definition includes the function name, description (from docstring),
        and complete parameter schema with types and required fields.

    Example:
        For a function:
        ```python
        def calculate(a: int, b: int, operation: str = "add") -> int:
            '''Perform arithmetic calculation.'''
            return a + b if operation == "add" else a - b
        ```

        Returns:
        ```python
        [
            {
                "type": "function",
                "function": {
                    "name": "calculate",
                    "description": "Perform arithmetic calculation.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "a": {"type": "integer"},
                            "b": {"type": "integer"},
                            "operation": {"type": "string", "default": "add"},
                        },
                        "required": ["a", "b"],
                    },
                },
            }
        ]
        ```

    Note:
        Parameters listed in INJECTABLE_PARAMS (like 'state', 'config',
        'tool_call_id') are automatically excluded from the generated schema
        as they are provided by the framework during execution.
    """
    tools: list[dict] = []
    for name, fn in self._funcs.items():
        sig = inspect.signature(fn)
        params_schema: dict = {"type": "object", "properties": {}, "required": []}

        for p_name, p in sig.parameters.items():
            if p.kind in (
                inspect.Parameter.VAR_POSITIONAL,
                inspect.Parameter.VAR_KEYWORD,
            ):
                continue

            if p_name in INJECTABLE_PARAMS:
                continue

            annotation = p.annotation if p.annotation is not inspect._empty else str
            prop = SchemaMixin._annotation_to_schema(annotation, p.default)
            params_schema["properties"][p_name] = prop

            if p.default is inspect._empty:
                params_schema["required"].append(p_name)

        if not params_schema["required"]:
            params_schema.pop("required")

        description = inspect.getdoc(fn) or "No description provided."

        # provider = getattr(fn, "_py_tool_provider", None)
        # tags = getattr(fn, "_py_tool_tags", None)
        # capabilities = getattr(fn, "_py_tool_capabilities", None)

        entry = {
            "type": "function",
            "function": {
                "name": name,
                "description": description,
                "parameters": params_schema,
            },
        }
        # meta: dict[str, t.Any] = {}
        # if provider:
        #     meta["provider"] = provider
        # if tags:
        #     meta["tags"] = tags
        # if capabilities:
        #     meta["capabilities"] = capabilities
        # if meta:
        #     entry["x-pyagenity"] = meta

        tools.append(entry)

    return tools

utils

Modules:

Name Description
handler_mixins

Shared mixins for graph and node handler classes.

invoke_handler
invoke_node_handler

InvokeNodeHandler utilities for PyAgenity agent graph execution.

stream_handler

Streaming graph execution handler for PyAgenity workflows.

stream_node_handler

Streaming node handler for PyAgenity graph workflows.

stream_utils

Streaming utility functions for PyAgenity graph workflows.

utils

Core utility functions for graph execution and state management.

Modules

handler_mixins

Shared mixins for graph and node handler classes.

This module provides lightweight mixins that add common functionality to handler classes without changing their core runtime behavior. The mixins follow the composition pattern to keep responsibilities explicit and allow handlers to inherit only the capabilities they need.

The mixins provide structured logging, configuration management, and other cross-cutting concerns that are commonly needed across different handler types. By using mixins, the core handler logic remains focused while gaining these shared capabilities.

Typical usage
class MyHandler(BaseLoggingMixin, InterruptConfigMixin):
    def __init__(self):
        self._set_interrupts(["node1"], ["node2"])
        self._log_start("Handler initialized")

Classes:

Name Description
BaseLoggingMixin

Provides structured logging helpers for handler classes.

InterruptConfigMixin

Manages interrupt configuration for graph-level execution handlers.

Classes
BaseLoggingMixin

Provides structured logging helpers for handler classes.

This mixin adds consistent logging capabilities to handler classes without requiring them to manage logger instances directly. It automatically creates loggers based on the module name and provides convenience methods for common logging operations.

The mixin is designed to be lightweight and non-intrusive, adding only logging functionality without affecting the core behavior of the handler.

Attributes:

Name Type Description
_logger Logger

Cached logger instance for the handler class.

Example
class MyHandler(BaseLoggingMixin):
    def process(self):
        self._log_start("Processing started")
        try:
            # Do work
            self._log_debug("Work completed successfully")
        except Exception as e:
            self._log_error("Processing failed: %s", e)
Source code in pyagenity/graph/utils/handler_mixins.py
class BaseLoggingMixin:
    """Provides structured logging helpers for handler classes.

    This mixin adds consistent logging capabilities to handler classes without
    requiring them to manage logger instances directly. It automatically creates
    loggers based on the module name and provides convenience methods for
    common logging operations.

    The mixin is designed to be lightweight and non-intrusive, adding only
    logging functionality without affecting the core behavior of the handler.

    Attributes:
        _logger: Cached logger instance for the handler class.

    Example:
        ```python
        class MyHandler(BaseLoggingMixin):
            def process(self):
                self._log_start("Processing started")
                try:
                    # Do work
                    self._log_debug("Work completed successfully")
                except Exception as e:
                    self._log_error("Processing failed: %s", e)
        ```
    """

    @property
    def _logger(self) -> logging.Logger:
        """Get or create a logger instance for this handler.

        Creates a logger using the handler's module name, providing consistent
        logging across different handler instances while maintaining proper
        logger hierarchy and configuration.

        Returns:
            Logger instance configured for this handler's module.
        """
        return logging.getLogger(getattr(self, "__module__", __name__))

    def _log_start(self, msg: str, *args: Any) -> None:
        """Log an informational message for process start/initialization.

        Args:
            msg: Log message format string.
            *args: Arguments for message formatting.
        """
        self._logger.info(msg, *args)

    def _log_debug(self, msg: str, *args: Any) -> None:
        """Log a debug message for detailed execution information.

        Args:
            msg: Log message format string.
            *args: Arguments for message formatting.
        """
        self._logger.debug(msg, *args)

    def _log_error(self, msg: str, *args: Any) -> None:
        """Log an error message for exceptional conditions.

        Args:
            msg: Log message format string.
            *args: Arguments for message formatting.
        """
        self._logger.error(msg, *args)
InterruptConfigMixin

Manages interrupt configuration for graph-level execution handlers.

This mixin provides functionality to store and manage interrupt points configuration for graph execution. Interrupts allow graph execution to be paused before or after specific nodes for debugging, human intervention, or checkpoint creation.

The mixin maintains separate lists for "before" and "after" interrupts, allowing fine-grained control over when graph execution should pause.

Attributes:

- interrupt_before (list[str] | None): List of node names where execution should pause before node execution begins.
- interrupt_after (list[str] | None): List of node names where execution should pause after node execution completes.

Example:

```python
class GraphHandler(InterruptConfigMixin):
    def __init__(self):
        self._set_interrupts(
            interrupt_before=["approval_node"], interrupt_after=["data_processing"]
        )
```
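The mixin only stores the configuration; the invoke and stream handlers consult these lists around each node. Below is a minimal sketch of that consumption pattern, using a hypothetical `should_pause_before` helper that is not part of the library:

```python
class PausableHandler(InterruptConfigMixin):
    def __init__(self, interrupt_before=None, interrupt_after=None):
        # _set_interrupts normalizes None to empty lists
        self._set_interrupts(interrupt_before, interrupt_after)

    def should_pause_before(self, node_name: str) -> bool:
        # Pause only when the node was registered as a before-interrupt
        return node_name in (self.interrupt_before or [])


handler = PausableHandler(interrupt_before=["approval_node"])
assert handler.should_pause_before("approval_node")
assert not handler.should_pause_before("data_processing")
```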
Source code in pyagenity/graph/utils/handler_mixins.py
class InterruptConfigMixin:
    """Manages interrupt configuration for graph-level execution handlers.

    This mixin provides functionality to store and manage interrupt points
    configuration for graph execution. Interrupts allow graph execution to be
    paused before or after specific nodes for debugging, human intervention,
    or checkpoint creation.

    The mixin maintains separate lists for "before" and "after" interrupts,
    allowing fine-grained control over when graph execution should pause.

    Attributes:
        interrupt_before: List of node names where execution should pause
            before node execution begins.
        interrupt_after: List of node names where execution should pause
            after node execution completes.

    Example:
        ```python
        class GraphHandler(InterruptConfigMixin):
            def __init__(self):
                self._set_interrupts(
                    interrupt_before=["approval_node"], interrupt_after=["data_processing"]
                )
        ```
    """

    interrupt_before: list[str] | None
    interrupt_after: list[str] | None

    def _set_interrupts(
        self,
        interrupt_before: list[str] | None,
        interrupt_after: list[str] | None,
    ) -> None:
        """Configure interrupt points for graph execution control.

        Sets up the interrupt configuration for this handler, defining which
        nodes should trigger execution pauses. This method normalizes None
        values to empty lists for consistent handling.

        Args:
            interrupt_before: List of node names where execution should be
                interrupted before the node begins execution. Pass None to
                disable before-interrupts.
            interrupt_after: List of node names where execution should be
                interrupted after the node completes execution. Pass None to
                disable after-interrupts.

        Note:
            This method should be called during handler initialization to
            establish the interrupt configuration before graph execution begins.
        """
        self.interrupt_before = interrupt_before or []
        self.interrupt_after = interrupt_after or []
Attributes
interrupt_after instance-attribute
interrupt_after
interrupt_before instance-attribute
interrupt_before
invoke_handler

Classes:

- InvokeHandler

Attributes:

- StateT
- logger
Attributes
StateT module-attribute
StateT = TypeVar('StateT', bound=AgentState)
logger module-attribute
logger = getLogger(__name__)
Classes
InvokeHandler

Bases: BaseLoggingMixin, InterruptConfigMixin

Methods:

- __init__
- invoke: Execute the graph asynchronously with event publishing.

Attributes:

- edges (list[Edge])
- interrupt_after
- interrupt_before
- nodes (dict[str, Node])
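InvokeHandler instances are normally created internally when a StateGraph is compiled, so direct construction is rarely needed. As a rough sketch of the calling contract only (assuming a ready-made `handler`, plus `Message` and `AgentState` from the package; import paths are not shown): fresh executions must supply `messages` in the input data, and the config carries the thread id and recursion limit read by the executor.

```python
# Hypothetical call site; `handler` is an InvokeHandler built from compiled
# nodes and edges, and Message/AgentState come from the PyAgenity package.
input_data = {"messages": [Message.text_message("Summarize the report")]}
config = {"thread_id": "thread-1", "recursion_limit": 25}

result = await handler.invoke(
    input_data,
    config,
    default_state=AgentState(),  # default-constructed state; assumption for illustration
)
```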
Source code in pyagenity/graph/utils/invoke_handler.py
class InvokeHandler[StateT: AgentState](
    BaseLoggingMixin,
    InterruptConfigMixin,
):
    @inject
    def __init__(
        self,
        nodes: dict[str, Node],
        edges: list[Edge],
        interrupt_before: list[str] | None = None,
        interrupt_after: list[str] | None = None,
    ):
        self.nodes: dict[str, Node] = nodes
        self.edges: list[Edge] = edges
        # Keep existing attributes for backward-compatibility
        self.interrupt_before = interrupt_before or []
        self.interrupt_after = interrupt_after or []
        # And set via mixin for a single source of truth
        self._set_interrupts(interrupt_before, interrupt_after)

    async def _check_interrupted(
        self,
        state: StateT,
        input_data: dict[str, Any],
        config: dict[str, Any],
    ) -> dict[str, Any]:
        if state.is_interrupted():
            logger.info(
                "Resuming from interrupted state at node '%s'", state.execution_meta.current_node
            )
            # This is a resume case - clear interrupt and merge input data
            if input_data:
                config["resume_data"] = input_data
                logger.debug("Added resume data with %d keys", len(input_data))
            state.clear_interrupt()
        elif not input_data.get("messages") and not state.context:
            # This is a fresh execution - validate input data
            error_msg = "Input data must contain 'messages' for new execution."
            logger.error(error_msg)
            raise ValueError(error_msg)
        else:
            logger.info(
                "Starting fresh execution with %d messages", len(input_data.get("messages", []))
            )

        return config

    async def _check_and_handle_interrupt(
        self,
        current_node: str,
        interrupt_type: str,
        state: StateT,
        config: dict[str, Any],
    ) -> bool:
        """Check for interrupts and save state if needed. Returns True if interrupted."""
        interrupt_nodes: list[str] = (
            self.interrupt_before if interrupt_type == "before" else self.interrupt_after
        ) or []

        if current_node in interrupt_nodes:
            status = (
                ExecutionStatus.INTERRUPTED_BEFORE
                if interrupt_type == "before"
                else ExecutionStatus.INTERRUPTED_AFTER
            )
            state.set_interrupt(
                current_node,
                f"interrupt_{interrupt_type}: {current_node}",
                status,
            )
            # Save state and interrupt
            await sync_data(
                state=state,
                config=config,
                messages=[],
                trim=True,
            )
            logger.debug("Node '%s' interrupted", current_node)
            return True

        logger.debug(
            "No interrupts found for node '%s', continuing execution",
            current_node,
        )
        return False

    async def _check_stop_requested(
        self,
        state: StateT,
        current_node: str,
        event: EventModel,
        messages: list[Message],
        config: dict[str, Any],
    ) -> bool:
        """Check if a stop has been requested externally."""
        state = await reload_state(config, state)  # type: ignore

        # Check if a stop was requested externally (e.g., frontend)
        if state.is_stopped_requested():
            logger.info(
                "Stop requested for thread '%s' at node '%s'",
                config.get("thread_id"),
                current_node,
            )
            state.set_interrupt(
                current_node,
                "stop_requested",
                ExecutionStatus.INTERRUPTED_AFTER,
                data={"source": "stop", "info": "requested via is_stopped_requested"},
            )
            await sync_data(state=state, config=config, messages=messages, trim=True)
            event.event_type = EventType.INTERRUPTED
            event.metadata["interrupted"] = "Stop"
            event.metadata["status"] = "Graph execution stopped by request"
            event.data["state"] = state.model_dump()
            publish_event(event)
            return True
        return False

    async def _execute_graph(  # noqa: PLR0912, PLR0915
        self,
        state: StateT,
        config: dict[str, Any],
    ) -> tuple[StateT, list[Message]]:
        """Execute the entire graph with support for interrupts and resuming."""
        logger.info(
            "Starting graph execution from node '%s' at step %d",
            state.execution_meta.current_node,
            state.execution_meta.step,
        )
        logger.debug("DEBUG: Current node value: %r", state.execution_meta.current_node)
        logger.debug("DEBUG: END constant value: %r", END)
        logger.debug("DEBUG: Are they equal? %s", state.execution_meta.current_node == END)
        messages: list[Message] = []
        max_steps = config.get("recursion_limit", 25)
        logger.debug("Max steps limit set to %d", max_steps)

        # Get the most recent human (user) message from state
        last_human_message = state.context[-1] if state.context else None
        if last_human_message and last_human_message.role != "user":
            msg = [msg for msg in reversed(state.context) if msg.role == "user"]
            last_human_message = msg[0] if msg else None

        if last_human_message:
            logger.debug("Last human message: %s", last_human_message.content)
            messages.append(last_human_message)

        # Get current execution info from state
        current_node = state.execution_meta.current_node
        step = state.execution_meta.step

        # Create event for graph execution
        event = EventModel.default(
            config,
            data={"state": state.model_dump()},
            event=Event.GRAPH_EXECUTION,
            content_type=[ContentType.STATE],
            node_name=current_node,
            extra={
                "current_node": current_node,
                "step": step,
                "max_steps": max_steps,
            },
        )

        try:
            while current_node != END and step < max_steps:
                logger.debug("Executing step %d at node '%s'", step, current_node)
                # Reload state in each iteration to get latest (in case of external updates)
                res = await self._check_stop_requested(
                    state,
                    current_node,
                    event,
                    messages,
                    config,
                )
                if res:
                    return state, messages

                # Update execution metadata
                state.set_current_node(current_node)
                state.execution_meta.step = step
                await call_realtime_sync(state, config)
                event.data["state"] = state.model_dump()
                event.metadata["step"] = step
                event.metadata["current_node"] = current_node
                event.event_type = EventType.PROGRESS
                publish_event(event)

                # Check for interrupt_before
                if await self._check_and_handle_interrupt(
                    current_node,
                    "before",
                    state,
                    config,
                ):
                    logger.info("Graph execution interrupted before node '%s'", current_node)
                    event.event_type = EventType.INTERRUPTED
                    event.metadata["interrupted"] = "Before"
                    event.metadata["status"] = "Graph execution interrupted before node execution"
                    event.data["interrupted"] = "Before"
                    publish_event(event)
                    return state, messages

                # Execute current node
                logger.debug("Executing node '%s'", current_node)
                node = self.nodes[current_node]

                # Publish node invocation event

                ###############################################
                ##### Node Execution Started ##################
                ###############################################

                result = await node.execute(config, state)  # type: ignore

                ###############################################
                ##### Node Execution Finished #################
                ###############################################

                logger.debug("Node '%s' execution completed", current_node)

                next_node = None

                # Process result and get next node
                if isinstance(result, list):
                    # If result is a list of Message, append to messages
                    messages.extend(result)
                    logger.debug(
                        "Node '%s' returned %d messages, total messages now %d",
                        current_node,
                        len(result),
                        len(messages),
                    )
                    # Add messages to state context so they're visible to subsequent nodes
                    state.context = add_messages(state.context, result)

                # No state change beyond adding messages, just advance to next node
                if isinstance(result, dict):
                    state = result.get("state", state)
                    next_node = result.get("next_node")
                    new_messages = result.get("messages", [])
                    if new_messages:
                        messages.extend(new_messages)
                        logger.debug(
                            "Node '%s' returned %d messages, total messages now %d",
                            current_node,
                            len(new_messages),
                            len(messages),
                        )

                logger.debug(
                    "Node result processed, next_node=%s, total_messages=%d",
                    next_node,
                    len(messages),
                )

                # Check stop again after node execution
                res = await self._check_stop_requested(
                    state,
                    current_node,
                    event,
                    messages,
                    config,
                )
                if res:
                    return state, messages

                # Call realtime sync after node execution (if state/messages changed)
                await call_realtime_sync(state, config)
                event.event_type = EventType.UPDATE
                event.data["state"] = state.model_dump()
                event.data["messages"] = [m.model_dump() for m in messages] if messages else []
                if messages:
                    lm = messages[-1]
                    event.content = lm.text() if isinstance(lm.content, list) else lm.content
                    if isinstance(lm.content, list):
                        event.content_blocks = lm.content
                event.content_type = [ContentType.STATE, ContentType.MESSAGE]
                publish_event(event)

                # Check for interrupt_after
                if await self._check_and_handle_interrupt(
                    current_node,
                    "after",
                    state,
                    config,
                ):
                    logger.info("Graph execution interrupted after node '%s'", current_node)
                    # For interrupt_after, advance to next node before pausing
                    if next_node is None:
                        next_node = get_next_node(current_node, state, self.edges)
                    state.set_current_node(next_node)

                    event.event_type = EventType.INTERRUPTED
                    event.data["interrupted"] = "After"
                    event.metadata["interrupted"] = "After"
                    event.data["state"] = state.model_dump()
                    publish_event(event)
                    return state, messages

                # Get next node (only if no explicit navigation from Command)
                if next_node is None:
                    current_node = get_next_node(current_node, state, self.edges)
                    logger.debug("Next node determined by graph logic: '%s'", current_node)
                else:
                    current_node = next_node
                    logger.debug("Next node determined by command: '%s'", current_node)

                # Check if we've reached the end after determining next node
                logger.debug("Checking if current_node '%s' == END '%s'", current_node, END)
                if current_node == END:
                    logger.info("Graph execution reached END node, completing")
                    break

                # Advance step after successful node execution
                step += 1
                state.advance_step()
                await call_realtime_sync(state, config)
                event.event_type = EventType.UPDATE

                event.metadata["State_Updated"] = "State Updated"
                event.data["state"] = state.model_dump()
                publish_event(event)

                if step >= max_steps:
                    error_msg = "Graph execution exceeded maximum steps"
                    logger.error(error_msg)
                    state.error(error_msg)
                    await call_realtime_sync(state, config)
                    event.event_type = EventType.ERROR
                    event.data["state"] = state.model_dump()
                    event.metadata["error"] = error_msg
                    event.metadata["step"] = step
                    event.metadata["current_node"] = current_node

                    publish_event(event)
                    raise GraphRecursionError(
                        f"Graph execution exceeded recursion limit: {max_steps}"
                    )

            # Execution completed successfully
            logger.info(
                "Graph execution completed successfully at node '%s' after %d steps",
                current_node,
                step,
            )
            state.complete()
            res = await sync_data(
                state=state,
                config=config,
                messages=messages,
                trim=True,
            )
            event.event_type = EventType.END
            event.data["state"] = state.model_dump()
            event.data["messages"] = [m.model_dump() for m in messages] if messages else []
            if messages:
                fm = messages[-1]
                event.content = fm.text() if isinstance(fm.content, list) else fm.content
                if isinstance(fm.content, list):
                    event.content_blocks = fm.content
            event.content_type = [ContentType.STATE, ContentType.MESSAGE]
            event.metadata["status"] = "Graph execution completed"
            event.metadata["step"] = step
            event.metadata["current_node"] = current_node
            event.metadata["is_context_trimmed"] = res

            publish_event(event)

            return state, messages

        except Exception as e:
            # Handle execution errors
            logger.exception("Graph execution failed: %s", e)
            state.error(str(e))

            # Publish error event
            event.event_type = EventType.ERROR
            event.metadata["error"] = str(e)
            event.data["state"] = state.model_dump()
            publish_event(event)

            await sync_data(
                state=state,
                config=config,
                messages=messages,
                trim=True,
            )
            raise

    async def invoke(
        self,
        input_data: dict[str, Any],
        config: dict[str, Any],
        default_state: StateT,
        response_granularity: ResponseGranularity = ResponseGranularity.LOW,
    ):
        """Execute the graph asynchronously with event publishing."""
        logger.info(
            "Starting asynchronous graph execution with %d input keys, granularity=%s",
            len(input_data) if input_data else 0,
            response_granularity,
        )
        input_data = input_data or {}

        # Load or initialize state
        logger.debug("Loading or creating state from input data")
        new_state = await load_or_create_state(
            input_data,
            config,
            default_state,
        )
        state: StateT = new_state  # type: ignore[assignment]
        logger.debug(
            "State loaded: interrupted=%s, current_node=%s, step=%d",
            state.is_interrupted(),
            state.execution_meta.current_node,
            state.execution_meta.step,
        )

        # Event publishing logic
        event = EventModel.default(
            config,
            data={"state": state.model_dump()},
            event=Event.GRAPH_EXECUTION,
            content_type=[ContentType.STATE],
            node_name=state.execution_meta.current_node,
            extra={
                "current_node": state.execution_meta.current_node,
                "step": state.execution_meta.step,
            },
        )
        event.event_type = EventType.START
        publish_event(event)

        # Check if this is a resume case
        config = await self._check_interrupted(state, input_data, config)

        event.event_type = EventType.UPDATE
        event.metadata["status"] = "Graph invoked"
        publish_event(event)

        try:
            logger.debug("Beginning graph execution")
            event.event_type = EventType.PROGRESS
            event.metadata["status"] = "Graph execution started"
            publish_event(event)

            final_state, messages = await self._execute_graph(state, config)
            logger.info("Graph execution completed with %d final messages", len(messages))

            event.event_type = EventType.END
            event.metadata["status"] = "Graph execution completed"
            event.data["state"] = final_state.model_dump()
            event.data["messages"] = [m.model_dump() for m in messages] if messages else []
            publish_event(event)

            return await parse_response(
                final_state,
                messages,
                response_granularity,
            )
        except Exception as e:
            logger.exception("Graph execution failed: %s", e)
            event.event_type = EventType.ERROR
            event.metadata["status"] = f"Graph execution failed: {e}"
            event.data["error"] = str(e)
            publish_event(event)
            raise
Attributes
edges instance-attribute
edges = edges
interrupt_after instance-attribute
interrupt_after = interrupt_after or []
interrupt_before instance-attribute
interrupt_before = interrupt_before or []
nodes instance-attribute
nodes = nodes
Functions
__init__
__init__(nodes, edges, interrupt_before=None, interrupt_after=None)
Source code in pyagenity/graph/utils/invoke_handler.py
@inject
def __init__(
    self,
    nodes: dict[str, Node],
    edges: list[Edge],
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
):
    self.nodes: dict[str, Node] = nodes
    self.edges: list[Edge] = edges
    # Keep existing attributes for backward-compatibility
    self.interrupt_before = interrupt_before or []
    self.interrupt_after = interrupt_after or []
    # And set via mixin for a single source of truth
    self._set_interrupts(interrupt_before, interrupt_after)
invoke async
invoke(input_data, config, default_state, response_granularity=ResponseGranularity.LOW)

Execute the graph asynchronously with event publishing.

Source code in pyagenity/graph/utils/invoke_handler.py
async def invoke(
    self,
    input_data: dict[str, Any],
    config: dict[str, Any],
    default_state: StateT,
    response_granularity: ResponseGranularity = ResponseGranularity.LOW,
):
    """Execute the graph asynchronously with event publishing."""
    logger.info(
        "Starting asynchronous graph execution with %d input keys, granularity=%s",
        len(input_data) if input_data else 0,
        response_granularity,
    )
    input_data = input_data or {}

    # Load or initialize state
    logger.debug("Loading or creating state from input data")
    new_state = await load_or_create_state(
        input_data,
        config,
        default_state,
    )
    state: StateT = new_state  # type: ignore[assignment]
    logger.debug(
        "State loaded: interrupted=%s, current_node=%s, step=%d",
        state.is_interrupted(),
        state.execution_meta.current_node,
        state.execution_meta.step,
    )

    # Event publishing logic
    event = EventModel.default(
        config,
        data={"state": state.model_dump()},
        event=Event.GRAPH_EXECUTION,
        content_type=[ContentType.STATE],
        node_name=state.execution_meta.current_node,
        extra={
            "current_node": state.execution_meta.current_node,
            "step": state.execution_meta.step,
        },
    )
    event.event_type = EventType.START
    publish_event(event)

    # Check if this is a resume case
    config = await self._check_interrupted(state, input_data, config)

    event.event_type = EventType.UPDATE
    event.metadata["status"] = "Graph invoked"
    publish_event(event)

    try:
        logger.debug("Beginning graph execution")
        event.event_type = EventType.PROGRESS
        event.metadata["status"] = "Graph execution started"
        publish_event(event)

        final_state, messages = await self._execute_graph(state, config)
        logger.info("Graph execution completed with %d final messages", len(messages))

        event.event_type = EventType.END
        event.metadata["status"] = "Graph execution completed"
        event.data["state"] = final_state.model_dump()
        event.data["messages"] = [m.model_dump() for m in messages] if messages else []
        publish_event(event)

        return await parse_response(
            final_state,
            messages,
            response_granularity,
        )
    except Exception as e:
        logger.exception("Graph execution failed: %s", e)
        event.event_type = EventType.ERROR
        event.metadata["status"] = f"Graph execution failed: {e}"
        event.data["error"] = str(e)
        publish_event(event)
        raise
Functions
invoke_node_handler

InvokeNodeHandler utilities for PyAgenity agent graph execution.

This module provides the InvokeNodeHandler class, which manages the invocation of node functions and tool nodes within the agent graph. It supports dependency injection, callback hooks, event publishing, and error recovery for both regular and tool-based nodes.

Classes:

- InvokeNodeHandler: Handles execution of node functions and tool nodes with DI and callbacks.

Usage:

```python
handler = InvokeNodeHandler(name, func, publisher)
result = await handler.invoke(config, state)
```
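As a slightly fuller sketch, a regular node function declares only the parameters it needs; the handler injects `state` and `config` by parameter name, and any other parameter without a default raises a TypeError during preparation. The `Message` import path below is assumed:

```python
from pyagenity.utils import Message  # import path assumed


def summarize(state, config):
    # `state` and `config` are injected by parameter name.
    last = state.context[-1].content if state.context else ""
    return [Message.text_message(f"Summary of: {last}")]


handler = InvokeNodeHandler("summarize", summarize)
# Inside a running event loop:
# result = await handler.invoke(config={"thread_id": "t-1"}, state=agent_state)
```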

Attributes:

- logger
Attributes
logger module-attribute
logger = getLogger(__name__)
Classes
InvokeNodeHandler

Bases: BaseLoggingMixin

Handles invocation of node functions and tool nodes in the agent graph.

Supports dependency injection, callback hooks, event publishing, and error recovery.

Parameters:

- name (str): Name of the node. Required.
- func (Callable | ToolNode): The function or ToolNode to execute. Required.
- publisher (BasePublisher): Event publisher for execution events. Default: Inject[BasePublisher].

Methods:

- __init__
- clear_signature_cache: Clear the function signature cache. Useful for testing or memory management.
- invoke: Execute the node function or ToolNode with dependency injection and callback hooks.

Attributes:

- func
- name
- publisher
Source code in pyagenity/graph/utils/invoke_node_handler.py
class InvokeNodeHandler(BaseLoggingMixin):
    """
    Handles invocation of node functions and tool nodes in the agent graph.

    Supports dependency injection, callback hooks, event publishing, and error recovery.

    Args:
        name (str): Name of the node.
        func (Callable | ToolNode): The function or ToolNode to execute.
        publisher (BasePublisher, optional): Event publisher for execution events.
    """

    # Class-level cache for function signatures to avoid repeated inspection
    _signature_cache: dict[Callable, inspect.Signature] = {}

    @classmethod
    def clear_signature_cache(cls) -> None:
        """Clear the function signature cache. Useful for testing or memory management."""
        cls._signature_cache.clear()

    def __init__(
        self,
        name: str,
        func: Union[Callable, "ToolNode"],
        publisher: BasePublisher | None = Inject[BasePublisher],
    ):
        self.name = name
        self.func = func
        self.publisher = publisher

    async def _handle_single_tool(
        self,
        tool_call: dict[str, Any],
        state: AgentState,
        config: dict[str, Any],
    ) -> Message:
        """
        Execute a single tool call using the ToolNode.

        Args:
            tool_call (dict): Tool call specification.
            state (AgentState): Current agent state.
            config (dict): Node configuration.

        Returns:
            Message: Resulting message from tool execution.
        """
        function_name = tool_call.get("function", {}).get("name", "")
        function_args: dict = json.loads(tool_call.get("function", {}).get("arguments", "{}"))
        tool_call_id = tool_call.get("id", "")

        logger.info(
            "Node '%s' executing tool '%s' with %d arguments",
            self.name,
            function_name,
            len(function_args),
        )
        logger.debug("Tool arguments: %s", function_args)

        # Execute the tool function with injectable parameters
        tool_result = await self.func.invoke(  # type: ignore
            function_name,  # type: ignore
            function_args,
            tool_call_id=tool_call_id,
            state=state,
            config=config,
        )
        logger.debug("Node '%s' tool execution completed successfully", self.name)

        return tool_result

    async def _call_tools(
        self,
        last_message: Message,
        state: "AgentState",
        config: dict[str, Any],
    ) -> list[Message]:
        """
        Execute all tool calls present in the last message.

        Args:
            last_message (Message): The last message containing tool calls.
            state (AgentState): Current agent state.
            config (dict): Node configuration.

        Returns:
            list[Message]: List of messages from tool executions.

        Raises:
            NodeError: If no tool calls are present.
        """
        logger.debug("Node '%s' calling tools from message", self.name)
        result: list[Message] = []
        if (
            hasattr(last_message, "tools_calls")
            and last_message.tools_calls
            and len(last_message.tools_calls) > 0
        ):
            # Execute every tool call in the message, in order
            for tool_call in last_message.tools_calls:
                res = await self._handle_single_tool(
                    tool_call,
                    state,
                    config,
                )
                result.append(res)
        else:
            # No tool calls present in the last message; nothing to execute
            logger.exception("Node '%s': No tool calls to execute", self.name)
            raise NodeError("No tool calls to execute")

        return result

    def _get_cached_signature(self, func: Callable) -> inspect.Signature:
        """Get cached signature for a function, computing it if not cached."""
        if func not in self._signature_cache:
            self._signature_cache[func] = inspect.signature(func)
        return self._signature_cache[func]

    def _prepare_input_data(
        self,
        state: "AgentState",
        config: dict[str, Any],
    ) -> dict:
        """
        Prepare input data for function invocation, handling injectable parameters.
        Uses cached function signature to avoid repeated inspection overhead.

        Args:
            state (AgentState): Current agent state.
            config (dict): Node configuration.

        Returns:
            dict: Input data for function call.

        Raises:
            TypeError: If required parameters are missing.
        """
        # Use cached signature inspection for performance
        sig = self._get_cached_signature(self.func)  # type: ignore Tool node won't come here
        input_data = {}
        default_data = {
            "state": state,
            "config": config,
        }

        # Get injectable parameters to determine which ones to exclude from manual passing
        # and prepare function arguments (excluding injectable parameters)
        for param_name, param in sig.parameters.items():
            # Skip *args/**kwargs
            if param.kind in (
                inspect.Parameter.VAR_POSITIONAL,
                inspect.Parameter.VAR_KEYWORD,
            ):
                continue

            # check its state, config
            if param_name in ["state", "config"]:
                input_data[param_name] = default_data[param_name]
            # Include regular function arguments
            elif param.default is inspect.Parameter.empty:
                raise TypeError(
                    f"Missing required parameter '{param_name}' for function '{self.func}'"
                )

        return input_data

    async def _call_normal_node(
        self,
        state: "AgentState",
        config: dict[str, Any],
        callback_mgr: CallbackManager,
    ) -> dict[str, Any]:
        """
        Execute a regular node function with callback hooks and event publishing.

        Args:
            state (AgentState): Current agent state.
            config (dict): Node configuration.
            callback_mgr (CallbackManager): Callback manager for hooks.

        Returns:
            dict: Result containing new state, messages, and next node.

        Raises:
            Exception: If function execution fails and cannot be recovered.
        """
        logger.debug("Node '%s' calling normal function", self.name)
        result: dict[str, Any] = {}

        logger.debug("Node '%s' is a regular function, executing with callbacks", self.name)
        # This is a regular function - likely AI function
        # Create callback context for AI invocation
        context = CallbackContext(
            invocation_type=InvocationType.AI,
            node_name=self.name,
            function_name=getattr(self.func, "__name__", str(self.func)),
            metadata={"config": config},
        )

        # Event publishing logic (similar to stream_node_handler)

        input_data = self._prepare_input_data(
            state,
            config,
        )

        last_message = state.context[-1] if state.context and len(state.context) > 0 else None

        event = EventModel.default(
            config,
            data={"state": state.model_dump()},
            event=Event.NODE_EXECUTION,
            content_type=[ContentType.STATE],
            node_name=self.name,
            extra={
                "node": self.name,
                "function_name": getattr(self.func, "__name__", str(self.func)),
                "last_message": last_message.model_dump() if last_message else None,
            },
        )
        publish_event(event)

        try:
            logger.debug("Node '%s' executing before_invoke callbacks", self.name)
            # Execute before_invoke callbacks
            input_data = await callback_mgr.execute_before_invoke(context, input_data)
            logger.debug("Node '%s' executing function", self.name)
            event.event_type = EventType.PROGRESS
            event.metadata["status"] = "Function execution started"
            publish_event(event)

            # Execute the actual function
            result = await call_sync_or_async(
                self.func,  # type: ignore
                **input_data,
            )
            logger.debug("Node '%s' function execution completed", self.name)

            logger.debug("Node '%s' executing after_invoke callbacks", self.name)
            # Execute after_invoke callbacks
            result = await callback_mgr.execute_after_invoke(context, input_data, result)

            # Process result and publish END event
            messages = []
            new_state, messages, next_node = await process_node_result(result, state, messages)
            event.data["state"] = new_state.model_dump()
            event.event_type = EventType.END
            event.metadata["status"] = "Function execution completed"
            event.data["messages"] = [m.model_dump() for m in messages] if messages else []
            event.data["next_node"] = next_node
            # mirror simple content + structured blocks for the last message
            if messages:
                last = messages[-1]
                event.content = last.text() if isinstance(last.content, list) else last.content
                if isinstance(last.content, list):
                    event.content_blocks = last.content

            publish_event(event)

            return {
                "state": new_state,
                "messages": messages,
                "next_node": next_node,
            }

        except Exception as e:
            logger.warning(
                "Node '%s' execution failed, executing error callbacks: %s", self.name, e
            )
            # Execute error callbacks
            recovery_result = await callback_mgr.execute_on_error(context, input_data, e)

            if recovery_result is not None:
                logger.info(
                    "Node '%s' recovered from error using callback result",
                    self.name,
                )
                # Use recovery result instead of raising the error
                event.event_type = EventType.END
                event.metadata["status"] = "Function execution recovered from error"
                event.data["message"] = recovery_result.model_dump()
                event.content_type = [ContentType.MESSAGE, ContentType.STATE]
                publish_event(event)
                return {
                    "state": state,
                    "messages": [recovery_result],
                    "next_node": None,
                }
            # Re-raise the original error
            logger.error("Node '%s' could not recover from error", self.name)
            event.event_type = EventType.ERROR
            event.metadata["status"] = f"Function execution failed: {e}"
            event.data["error"] = str(e)
            event.content_type = [ContentType.ERROR, ContentType.STATE]
            publish_event(event)
            raise

    async def invoke(
        self,
        config: dict[str, Any],
        state: AgentState,
        callback_mgr: CallbackManager = Inject[CallbackManager],
    ) -> dict[str, Any] | list[Message]:
        """
        Execute the node function or ToolNode with dependency injection and callback hooks.

        Args:
            config (dict): Node configuration.
            state (AgentState): Current agent state.
            callback_mgr (CallbackManager, optional): Callback manager for hooks.

        Returns:
            dict | list[Message]: Result of node execution (regular node or tool node).

        Raises:
            NodeError: If execution fails or context is missing for tool nodes.
        """
        logger.info("Executing node '%s'", self.name)
        logger.debug(
            "Node '%s' execution with state context size=%d, config keys=%s",
            self.name,
            len(state.context) if state.context else 0,
            list(config.keys()) if config else [],
        )

        try:
            if isinstance(self.func, ToolNode):
                logger.debug("Node '%s' is a ToolNode, executing tool calls", self.name)
                # This is tool execution - handled separately in ToolNode
                if state.context and len(state.context) > 0:
                    last_message = state.context[-1]
                    logger.debug("Node '%s' processing tool calls from last message", self.name)
                    result = await self._call_tools(
                        last_message,
                        state,
                        config,
                    )
                else:
                    # No context available, so tool calls cannot be resolved
                    error_msg = "No context available for tool execution"
                    logger.error("Node '%s': %s", self.name, error_msg)
                    raise NodeError(error_msg)

            else:
                result = await self._call_normal_node(
                    state,
                    config,
                    callback_mgr,
                )

            logger.info("Node '%s' execution completed successfully", self.name)
            return result
        except Exception as e:
            # This is the final catch-all for node execution errors
            logger.exception("Node '%s' execution failed: %s", self.name, e)
            raise NodeError(f"Error in node '{self.name}': {e!s}") from e
Attributes
func instance-attribute
func = func
name instance-attribute
name = name
publisher instance-attribute
publisher = publisher
Functions
__init__
__init__(name, func, publisher=Inject[BasePublisher])
Source code in pyagenity/graph/utils/invoke_node_handler.py
def __init__(
    self,
    name: str,
    func: Union[Callable, "ToolNode"],
    publisher: BasePublisher | None = Inject[BasePublisher],
):
    self.name = name
    self.func = func
    self.publisher = publisher
clear_signature_cache classmethod
clear_signature_cache()

Clear the function signature cache. Useful for testing or memory management.

Source code in pyagenity/graph/utils/invoke_node_handler.py
@classmethod
def clear_signature_cache(cls) -> None:
    """Clear the function signature cache. Useful for testing or memory management."""
    cls._signature_cache.clear()
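Because the cache is class-level and keyed by the function object, long-running processes that register many short-lived callables (or test suites that patch node functions) may want to reset it explicitly, for example:

```python
# e.g. in a test teardown or between batches of dynamically created nodes
InvokeNodeHandler.clear_signature_cache()
```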
invoke async
invoke(config, state, callback_mgr=Inject[CallbackManager])

Execute the node function or ToolNode with dependency injection and callback hooks.

Parameters:

- config (dict): Node configuration. Required.
- state (AgentState): Current agent state. Required.
- callback_mgr (CallbackManager): Callback manager for hooks. Default: Inject[CallbackManager].

Returns:

- dict[str, Any] | list[Message]: Result of node execution (regular node or tool node).

Raises:

- NodeError: If execution fails or context is missing for tool nodes.

Source code in pyagenity/graph/utils/invoke_node_handler.py
async def invoke(
    self,
    config: dict[str, Any],
    state: AgentState,
    callback_mgr: CallbackManager = Inject[CallbackManager],
) -> dict[str, Any] | list[Message]:
    """
    Execute the node function or ToolNode with dependency injection and callback hooks.

    Args:
        config (dict): Node configuration.
        state (AgentState): Current agent state.
        callback_mgr (CallbackManager, optional): Callback manager for hooks.

    Returns:
        dict | list[Message]: Result of node execution (regular node or tool node).

    Raises:
        NodeError: If execution fails or context is missing for tool nodes.
    """
    logger.info("Executing node '%s'", self.name)
    logger.debug(
        "Node '%s' execution with state context size=%d, config keys=%s",
        self.name,
        len(state.context) if state.context else 0,
        list(config.keys()) if config else [],
    )

    try:
        if isinstance(self.func, ToolNode):
            logger.debug("Node '%s' is a ToolNode, executing tool calls", self.name)
            # This is tool execution - handled separately in ToolNode
            if state.context and len(state.context) > 0:
                last_message = state.context[-1]
                logger.debug("Node '%s' processing tool calls from last message", self.name)
                result = await self._call_tools(
                    last_message,
                    state,
                    config,
                )
            else:
                # No context available, so tool calls cannot be resolved
                error_msg = "No context available for tool execution"
                logger.error("Node '%s': %s", self.name, error_msg)
                raise NodeError(error_msg)

        else:
            result = await self._call_normal_node(
                state,
                config,
                callback_mgr,
            )

        logger.info("Node '%s' execution completed successfully", self.name)
        return result
    except Exception as e:
        # This is the final catch-all for node execution errors
        logger.exception("Node '%s' execution failed: %s", self.name, e)
        raise NodeError(f"Error in node '{self.name}': {e!s}") from e
Functions
stream_handler

Streaming graph execution handler for PyAgenity workflows.

This module provides the StreamHandler class, which manages the execution of graph workflows with support for streaming output, interrupts, state persistence, and event publishing. It enables incremental result processing, pause/resume capabilities, and robust error handling for agent workflows that require real-time or chunked responses.

Classes:

- StreamHandler: Handles streaming execution for graph workflows in PyAgenity.

Attributes:

- StateT
- logger
Attributes
StateT module-attribute
StateT = TypeVar('StateT', bound=AgentState)
logger module-attribute
logger = getLogger(__name__)
Classes
StreamHandler

Bases: BaseLoggingMixin, InterruptConfigMixin

Handles streaming execution for graph workflows in PyAgenity.

StreamHandler manages the execution of agent workflows as directed graphs, supporting streaming output, pause/resume via interrupts, state persistence, and event publishing for monitoring and debugging. It enables incremental result processing and robust error handling for complex agent workflows.

Attributes:

- nodes (dict[str, Node]): Dictionary mapping node names to Node instances.
- edges (list[Edge]): List of Edge instances defining graph connections and routing.
- interrupt_before: List of node names where execution should pause before node execution.
- interrupt_after: List of node names where execution should pause after node execution.

Example:

```python
handler = StreamHandler(nodes, edges)
async for chunk in handler.stream(input_data, config, state):
    print(chunk)
```
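The streaming path reads the same configuration keys as the non-streaming invoke path, so a call site typically looks like the sketch below; `user_message` and `state` are assumed to exist already, and any extra config keys are passed through untouched:

```python
config = {
    "thread_id": "thread-1",  # reported when a stop is requested and used for persistence
    "recursion_limit": 25,    # maximum number of steps before execution is aborted (default 25)
}

async for chunk in handler.stream({"messages": [user_message]}, config, state):
    print(chunk)
```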

Methods:

- __init__
- stream: Execute the graph asynchronously with streaming output.

Source code in pyagenity/graph/utils/stream_handler.py
class StreamHandler[StateT: AgentState](
    BaseLoggingMixin,
    InterruptConfigMixin,
):
    """Handles streaming execution for graph workflows in PyAgenity.

    StreamHandler manages the execution of agent workflows as directed graphs,
    supporting streaming output, pause/resume via interrupts, state persistence,
    and event publishing for monitoring and debugging. It enables incremental
    result processing and robust error handling for complex agent workflows.

    Attributes:
        nodes: Dictionary mapping node names to Node instances.
        edges: List of Edge instances defining graph connections and routing.
        interrupt_before: List of node names where execution should pause before execution.
        interrupt_after: List of node names where execution should pause after execution.

    Example:
        ```python
        handler = StreamHandler(nodes, edges)
        async for chunk in handler.stream(input_data, config, state):
            print(chunk)
        ```
    """

    @inject
    def __init__(
        self,
        nodes: dict[str, Node],
        edges: list[Edge],
        interrupt_before: list[str] | None = None,
        interrupt_after: list[str] | None = None,
    ):
        self.nodes: dict[str, Node] = nodes
        self.edges: list[Edge] = edges
        self.interrupt_before = interrupt_before or []
        self.interrupt_after = interrupt_after or []
        self._set_interrupts(interrupt_before, interrupt_after)

    async def _check_interrupted(
        self,
        state: StateT,
        input_data: dict[str, Any],
        config: dict[str, Any],
    ) -> dict[str, Any]:
        if state.is_interrupted():
            logger.info(
                "Resuming from interrupted state at node '%s'", state.execution_meta.current_node
            )
            # This is a resume case - clear interrupt and merge input data
            if input_data:
                config["resume_data"] = input_data
                logger.debug("Added resume data with %d keys", len(input_data))
            state.clear_interrupt()
        elif not input_data.get("messages") and not state.context:
            # This is a fresh execution - validate input data
            error_msg = "Input data must contain 'messages' for new execution."
            logger.error(error_msg)
            raise ValueError(error_msg)
        else:
            logger.info(
                "Starting fresh execution with %d messages", len(input_data.get("messages", []))
            )

        return config

    async def _check_and_handle_interrupt(
        self,
        current_node: str,
        interrupt_type: str,
        state: StateT,
        config: dict[str, Any],
    ) -> bool:
        """Check for interrupts and save state if needed. Returns True if interrupted."""
        interrupt_nodes: list[str] = (
            self.interrupt_before if interrupt_type == "before" else self.interrupt_after
        ) or []

        if current_node in interrupt_nodes:
            status = (
                ExecutionStatus.INTERRUPTED_BEFORE
                if interrupt_type == "before"
                else ExecutionStatus.INTERRUPTED_AFTER
            )
            state.set_interrupt(
                current_node,
                f"interrupt_{interrupt_type}: {current_node}",
                status,
            )
            # Save state and interrupt
            await sync_data(
                state=state,
                config=config,
                messages=[],
                trim=True,
            )
            logger.debug("Node '%s' interrupted", current_node)
            return True

        logger.debug(
            "No interrupts found for node '%s', continuing execution",
            current_node,
        )
        return False

    async def _check_stop_requested(
        self,
        state: StateT,
        current_node: str,
        event: EventModel,
        messages: list[Message],
        config: dict[str, Any],
    ) -> bool:
        """Check if a stop has been requested externally."""
        state = await reload_state(config, state)  # type: ignore

        # Check if a stop was requested externally (e.g., frontend)
        if state.is_stopped_requested():
            logger.info(
                "Stop requested for thread '%s' at node '%s'",
                config.get("thread_id"),
                current_node,
            )
            state.set_interrupt(
                current_node,
                "stop_requested",
                ExecutionStatus.INTERRUPTED_AFTER,
                data={"source": "stop", "info": "requested via is_stopped_requested"},
            )
            await sync_data(state=state, config=config, messages=messages, trim=True)
            event.event_type = EventType.INTERRUPTED
            event.metadata["interrupted"] = "Stop"
            event.metadata["status"] = "Graph execution stopped by request"
            event.data["state"] = state.model_dump()
            publish_event(event)
            return True
        return False

    async def _execute_graph(  # noqa: PLR0912, PLR0915
        self,
        state: StateT,
        input_data: dict[str, Any],
        config: dict[str, Any],
    ) -> AsyncIterable[Message]:
        """
        Execute the entire graph with support for interrupts and resuming.

        Why are so many chunks yielded?
        We let the user choose the response granularity; at low granularity,
        only a few chunks (such as complete Message objects) are sent to the user.
        """
        logger.info(
            "Starting graph execution from node '%s' at step %d",
            state.execution_meta.current_node,
            state.execution_meta.step,
        )
        messages: list[Message] = []
        messages_ids = set()
        max_steps = config.get("recursion_limit", 25)
        logger.debug("Max steps limit set to %d", max_steps)

        last_human_messages = input_data.get("messages", []) or []
        # Stream initial input messages (e.g., human messages) so callers see full conversation
        # Only emit when present and avoid duplicates by tracking message_ids and existing context
        for m in last_human_messages:
            if m.message_id not in messages_ids:
                messages.append(m)
                messages_ids.add(m.message_id)
                yield m

        # Get current execution info from state
        current_node = state.execution_meta.current_node
        step = state.execution_meta.step

        # Create event for graph execution
        event = EventModel.default(
            config,
            data={"state": state.model_dump(exclude={"execution_meta"})},
            content_type=[ContentType.STATE],
            extra={"step": step, "current_node": current_node},
            event=Event.GRAPH_EXECUTION,
            node_name=current_node,
        )

        try:
            while current_node != END and step < max_steps:
                logger.debug("Executing step %d at node '%s'", step, current_node)

                res = await self._check_stop_requested(
                    state,
                    current_node,
                    event,
                    messages,
                    config,
                )
                if res:
                    return

                # Update execution metadata
                state.set_current_node(current_node)
                state.execution_meta.step = step
                await call_realtime_sync(state, config)

                # Update event with current step info
                event.data["step"] = step
                event.data["current_node"] = current_node
                event.event_type = EventType.PROGRESS
                event.metadata["status"] = f"Executing step {step} at node '{current_node}'"
                publish_event(event)

                # Check for interrupt_before
                if await self._check_and_handle_interrupt(
                    current_node,
                    "before",
                    state,
                    config,
                ):
                    logger.info("Graph execution interrupted before node '%s'", current_node)
                    event.event_type = EventType.INTERRUPTED
                    event.metadata["status"] = "Graph execution interrupted before node execution"
                    event.metadata["interrupted"] = "Before"
                    event.data["interrupted"] = "Before"
                    publish_event(event)
                    return

                # Execute current node
                logger.debug("Executing node '%s'", current_node)
                node = self.nodes[current_node]

                # Node execution
                result = node.stream(config, state)  # type: ignore

                logger.debug("Node '%s' execution completed", current_node)

                res = await self._check_stop_requested(
                    state,
                    current_node,
                    event,
                    messages,
                    config,
                )
                if res:
                    return

                # Process result and get next node
                next_node = None
                async for rs in result:
                    # Allow stop to break inner result loop as well
                    if isinstance(rs, Message) and rs.delta:
                        # Yield delta messages immediately for streaming
                        yield rs

                    elif isinstance(rs, Message) and not rs.delta:
                        yield rs

                        if rs.message_id not in messages_ids:
                            messages.append(rs)
                            messages_ids.add(rs.message_id)

                    elif isinstance(rs, dict) and "is_non_streaming" in rs:
                        if rs["is_non_streaming"]:
                            state = rs.get("state", state)
                            new_messages = rs.get("messages", [])
                            for m in new_messages:
                                if m.message_id not in messages_ids and not m.delta:
                                    messages.append(m)
                                    messages_ids.add(m.message_id)
                                yield m
                            next_node = rs.get("next_node", next_node)
                        else:
                            # Streaming path completed: ensure any collected messages are persisted
                            new_messages = rs.get("messages", [])
                            for m in new_messages:
                                if m.message_id not in messages_ids and not m.delta:
                                    messages.append(m)
                                    messages_ids.add(m.message_id)
                                    yield m
                            next_node = rs.get("next_node", next_node)
                    else:
                        # Process as node result (non-streaming path)
                        try:
                            state, new_messages, next_node = await process_node_result(
                                rs,
                                state,
                                [],
                            )
                            for m in new_messages:
                                if m.message_id not in messages_ids and not m.delta:
                                    messages.append(m)
                                    messages_ids.add(m.message_id)
                                    state.context = add_messages(state.context, [m])
                                    yield m
                        except Exception as e:
                            logger.error("Failed to process node result: %s", e)

                logger.debug(
                    "Node result processed, next_node=%s, total_messages=%d",
                    next_node,
                    len(messages),
                )

                # Add collected messages to state context
                if messages:
                    state.context = add_messages(state.context, messages)
                    logger.debug("Added %d messages to state context", len(messages))

                # Call realtime sync after node execution
                await call_realtime_sync(state, config)
                event.event_type = EventType.UPDATE
                event.data["state"] = state.model_dump()
                event.data["messages"] = [m.model_dump() for m in messages] if messages else []
                if messages:
                    lm = messages[-1]
                    event.content = lm.text() if isinstance(lm.content, list) else lm.content
                    if isinstance(lm.content, list):
                        event.content_blocks = lm.content
                event.content_type = [ContentType.STATE, ContentType.MESSAGE]
                publish_event(event)

                # Check for interrupt_after
                if await self._check_and_handle_interrupt(
                    current_node,
                    "after",
                    state,
                    config,
                ):
                    logger.info("Graph execution interrupted after node '%s'", current_node)
                    # For interrupt_after, advance to next node before pausing
                    if next_node is None:
                        next_node = get_next_node(current_node, state, self.edges)
                    state.set_current_node(next_node)

                    event.event_type = EventType.INTERRUPTED
                    event.data["interrupted"] = "After"
                    event.metadata["interrupted"] = "After"
                    event.data["state"] = state.model_dump()
                    publish_event(event)
                    return

                # Get next node
                if next_node is None:
                    current_node = get_next_node(current_node, state, self.edges)
                    logger.debug("Next node determined by graph logic: '%s'", current_node)
                else:
                    current_node = next_node
                    logger.debug("Next node determined by command: '%s'", current_node)

                # Advance step after successful node execution
                step += 1
                state.advance_step()
                await call_realtime_sync(state, config)

                event.event_type = EventType.UPDATE
                event.metadata["State_Updated"] = "State Updated"
                event.data["state"] = state.model_dump()
                publish_event(event)

                if step >= max_steps:
                    error_msg = "Graph execution exceeded maximum steps"
                    logger.error(error_msg)
                    state.error(error_msg)
                    await call_realtime_sync(state, config)

                    event.event_type = EventType.ERROR
                    event.data["state"] = state.model_dump()
                    event.metadata["error"] = error_msg
                    event.metadata["step"] = step
                    event.metadata["current_node"] = current_node
                    publish_event(event)

                    yield Message(
                        role="assistant",
                        content=[ErrorBlock(text=error_msg)],  # type: ignore
                    )

                    raise GraphRecursionError(
                        f"Graph execution exceeded recursion limit: {max_steps}"
                    )

            # Execution completed successfully
            logger.info(
                "Graph execution completed successfully at node '%s' after %d steps",
                current_node,
                step,
            )
            state.complete()
            is_context_trimmed = await sync_data(
                state=state,
                config=config,
                messages=messages,
                trim=True,
            )

            # Create completion event
            event.event_type = EventType.END
            event.data["state"] = state.model_dump()
            event.data["messages"] = [m.model_dump() for m in messages] if messages else []
            if messages:
                fm = messages[-1]
                event.content = fm.text() if isinstance(fm.content, list) else fm.content
                if isinstance(fm.content, list):
                    event.content_blocks = fm.content
            event.content_type = [ContentType.STATE, ContentType.MESSAGE]
            event.metadata["status"] = "Graph execution completed"
            event.metadata["step"] = step
            event.metadata["current_node"] = current_node
            event.metadata["is_context_trimmed"] = is_context_trimmed
            publish_event(event)

        except Exception as e:
            # Handle execution errors
            logger.exception("Graph execution failed: %s", e)
            state.error(str(e))

            # Publish error event
            event.event_type = EventType.ERROR
            event.metadata["error"] = str(e)
            event.data["state"] = state.model_dump()
            publish_event(event)

            await sync_data(
                state=state,
                config=config,
                messages=messages,
                trim=True,
            )
            raise

    async def stream(
        self,
        input_data: dict[str, Any],
        config: dict[str, Any],
        default_state: StateT,
        response_granularity: ResponseGranularity = ResponseGranularity.LOW,
    ) -> AsyncGenerator[Message]:
        """Execute the graph asynchronously with streaming output.

        Runs the graph workflow from start to finish, yielding incremental results
        as they become available. Automatically detects whether to start a fresh
        execution or resume from an interrupted state, supporting pause/resume
        and checkpointing.

        Args:
            input_data: Input dictionary for graph execution. For new executions,
                should contain 'messages' key with initial messages. For resumed
                executions, can contain additional data to merge.
            config: Configuration dictionary containing execution settings and context.
            default_state: Initial or template AgentState for workflow execution.
            response_granularity: Level of detail in the response (LOW, PARTIAL, FULL).

        Yields:
            Message objects representing incremental results from graph execution.
            The exact type and frequency of yields depends on node implementations
            and workflow configuration.

        Raises:
            GraphRecursionError: If execution exceeds recursion limit.
            ValueError: If input_data is invalid for new execution.
            Various exceptions: Depending on node execution failures.

        Example:
            ```python
            async for chunk in handler.stream(input_data, config, state):
                print(chunk)
            ```
        """
        logger.info(
            "Starting asynchronous graph execution with %d input keys, granularity=%s",
            len(input_data) if input_data else 0,
            response_granularity,
        )
        config = config or {}
        input_data = input_data or {}

        start_time = time.time()

        # Load or initialize state
        logger.debug("Loading or creating state from input data")
        new_state = await load_or_create_state(
            input_data,
            config,
            default_state,
        )
        state: StateT = new_state  # type: ignore[assignment]
        logger.debug(
            "State loaded: interrupted=%s, current_node=%s, step=%d",
            state.is_interrupted(),
            state.execution_meta.current_node,
            state.execution_meta.step,
        )

        cfg = config.copy()
        if "user" in cfg:
            # This key will be present when the graph is called
            # via the PyAgenity API
            del cfg["user"]

        event = EventModel.default(
            config,
            data={"state": state},
            content_type=[ContentType.STATE],
            extra={
                "is_interrupted": state.is_interrupted(),
                "current_node": state.execution_meta.current_node,
                "step": state.execution_meta.step,
                "config": cfg,
                "response_granularity": response_granularity.value,
            },
        )

        # Publish graph initialization event
        publish_event(event)

        # Check if this is a resume case
        config = await self._check_interrupted(state, input_data, config)

        # Now start Execution
        # Execute graph
        logger.debug("Beginning graph execution")
        result = self._execute_graph(state, input_data, config)
        async for chunk in result:
            yield chunk

        # Publish graph completion event
        time_taken = time.time() - start_time
        logger.info("Graph execution finished in %.2f seconds", time_taken)

        event.event_type = EventType.END
        event.metadata.update(
            {
                "time_taken": time_taken,
                "state": state.model_dump(),
                "step": state.execution_meta.step,
                "current_node": state.execution_meta.current_node,
                "is_interrupted": state.is_interrupted(),
                "total_messages": len(state.context) if state.context else 0,
            }
        )
        publish_event(event)
Attributes
edges instance-attribute
edges = edges
interrupt_after instance-attribute
interrupt_after = interrupt_after or []
interrupt_before instance-attribute
interrupt_before = interrupt_before or []
nodes instance-attribute
nodes = nodes
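The `interrupt_before` and `interrupt_after` lists drive pause/resume behaviour: when execution reaches a listed node, the state is persisted and the stream ends early, and calling `stream` again for the same thread resumes from that point, with any new `input_data` exposed to nodes as `config["resume_data"]`. A minimal usage sketch, assuming `nodes`, `edges`, a default `state`, and a checkpointer have already been set up elsewhere:

```python
# Hypothetical sketch; nodes, edges, state, and checkpointer wiring are not shown.
handler = StreamHandler(nodes, edges, interrupt_before=["search"])
config = {"thread_id": "thread-1"}

# Fresh run: input_data must contain "messages".
async for chunk in handler.stream(
    {"messages": [Message.text_message("find recent papers")]},
    config,
    state,
):
    print(chunk)  # the stream ends once the "search" node is reached

# Resume on the same thread: the interrupt is cleared and the extra
# input is exposed to nodes as config["resume_data"].
async for chunk in handler.stream({"approved": True}, config, state):
    print(chunk)
```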
Functions
__init__
__init__(nodes, edges, interrupt_before=None, interrupt_after=None)
Source code in pyagenity/graph/utils/stream_handler.py
@inject
def __init__(
    self,
    nodes: dict[str, Node],
    edges: list[Edge],
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
):
    self.nodes: dict[str, Node] = nodes
    self.edges: list[Edge] = edges
    self.interrupt_before = interrupt_before or []
    self.interrupt_after = interrupt_after or []
    self._set_interrupts(interrupt_before, interrupt_after)
stream async
stream(input_data, config, default_state, response_granularity=ResponseGranularity.LOW)

Execute the graph asynchronously with streaming output.

Runs the graph workflow from start to finish, yielding incremental results as they become available. Automatically detects whether to start a fresh execution or resume from an interrupted state, supporting pause/resume and checkpointing.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `input_data` | `dict[str, Any]` | Input dictionary for graph execution. For new executions, should contain a 'messages' key with initial messages. For resumed executions, can contain additional data to merge. | required |
| `config` | `dict[str, Any]` | Configuration dictionary containing execution settings and context. | required |
| `default_state` | `StateT` | Initial or template AgentState for workflow execution. | required |
| `response_granularity` | `ResponseGranularity` | Level of detail in the response (LOW, PARTIAL, FULL). | `LOW` |

Yields:

| Type | Description |
|------|-------------|
| `AsyncGenerator[Message]` | Message objects representing incremental results from graph execution. The exact type and frequency of yields depends on node implementations and workflow configuration. |

Raises:

| Type | Description |
|------|-------------|
| `GraphRecursionError` | If execution exceeds the recursion limit. |
| `ValueError` | If input_data is invalid for a new execution. |
| Various exceptions | Depending on node execution failures. |

Example
async for chunk in handler.stream(input_data, config, state):
    print(chunk)
Source code in pyagenity/graph/utils/stream_handler.py
async def stream(
    self,
    input_data: dict[str, Any],
    config: dict[str, Any],
    default_state: StateT,
    response_granularity: ResponseGranularity = ResponseGranularity.LOW,
) -> AsyncGenerator[Message]:
    """Execute the graph asynchronously with streaming output.

    Runs the graph workflow from start to finish, yielding incremental results
    as they become available. Automatically detects whether to start a fresh
    execution or resume from an interrupted state, supporting pause/resume
    and checkpointing.

    Args:
        input_data: Input dictionary for graph execution. For new executions,
            should contain 'messages' key with initial messages. For resumed
            executions, can contain additional data to merge.
        config: Configuration dictionary containing execution settings and context.
        default_state: Initial or template AgentState for workflow execution.
        response_granularity: Level of detail in the response (LOW, PARTIAL, FULL).

    Yields:
        Message objects representing incremental results from graph execution.
        The exact type and frequency of yields depends on node implementations
        and workflow configuration.

    Raises:
        GraphRecursionError: If execution exceeds recursion limit.
        ValueError: If input_data is invalid for new execution.
        Various exceptions: Depending on node execution failures.

    Example:
        ```python
        async for chunk in handler.stream(input_data, config, state):
            print(chunk)
        ```
    """
    logger.info(
        "Starting asynchronous graph execution with %d input keys, granularity=%s",
        len(input_data) if input_data else 0,
        response_granularity,
    )
    config = config or {}
    input_data = input_data or {}

    start_time = time.time()

    # Load or initialize state
    logger.debug("Loading or creating state from input data")
    new_state = await load_or_create_state(
        input_data,
        config,
        default_state,
    )
    state: StateT = new_state  # type: ignore[assignment]
    logger.debug(
        "State loaded: interrupted=%s, current_node=%s, step=%d",
        state.is_interrupted(),
        state.execution_meta.current_node,
        state.execution_meta.step,
    )

    cfg = config.copy()
    if "user" in cfg:
        # This key will be present when the graph is called
        # via the PyAgenity API
        del cfg["user"]

    event = EventModel.default(
        config,
        data={"state": state},
        content_type=[ContentType.STATE],
        extra={
            "is_interrupted": state.is_interrupted(),
            "current_node": state.execution_meta.current_node,
            "step": state.execution_meta.step,
            "config": cfg,
            "response_granularity": response_granularity.value,
        },
    )

    # Publish graph initialization event
    publish_event(event)

    # Check if this is a resume case
    config = await self._check_interrupted(state, input_data, config)

    # Now start Execution
    # Execute graph
    logger.debug("Beginning graph execution")
    result = self._execute_graph(state, input_data, config)
    async for chunk in result:
        yield chunk

    # Publish graph completion event
    time_taken = time.time() - start_time
    logger.info("Graph execution finished in %.2f seconds", time_taken)

    event.event_type = EventType.END
    event.metadata.update(
        {
            "time_taken": time_taken,
            "state": state.model_dump(),
            "step": state.execution_meta.step,
            "current_node": state.execution_meta.current_node,
            "is_interrupted": state.is_interrupted(),
            "total_messages": len(state.context) if state.context else 0,
        }
    )
    publish_event(event)
Functions
stream_node_handler

Streaming node handler for PyAgenity graph workflows.

This module provides the StreamNodeHandler class, which manages the execution of graph nodes that support streaming output. It handles both regular function nodes and ToolNode instances, enabling incremental result processing, dependency injection, callback management, and event publishing.

StreamNodeHandler is a key component for enabling real-time, chunked, or incremental responses in agent workflows, supporting both synchronous and asynchronous execution patterns.
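For normal function nodes, the handler yields `Message` chunks as they are produced and finishes with a summary dict carrying `is_non_streaming`, `messages`, and `next_node` (plus `state` on the non-streaming path), which the graph engine uses for routing. A minimal consumer sketch, assuming a `handler`, `config`, and `state` already exist:

```python
# Hypothetical sketch mirroring how StreamHandler drains a node's stream.
async for chunk in handler.stream(config, state):
    if isinstance(chunk, Message):
        # Incremental output: delta and complete messages.
        print(chunk)
    elif isinstance(chunk, dict) and "is_non_streaming" in chunk:
        # Terminal summary for this node: collected messages and a routing hint.
        next_node = chunk.get("next_node")
        collected = chunk.get("messages", [])
```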

Classes:

| Name | Description |
|------|-------------|
| `StreamNodeHandler` | Handles streaming execution for graph nodes in PyAgenity workflows. |

Attributes:

- `logger`

Attributes
logger module-attribute
logger = getLogger(__name__)
Classes
StreamNodeHandler

Bases: BaseLoggingMixin

Handles streaming execution for graph nodes in PyAgenity workflows.

StreamNodeHandler manages the execution of nodes that can produce streaming output, including both regular function nodes and ToolNode instances. It supports dependency injection, callback management, event publishing, and incremental result processing.

Attributes:

| Name | Type | Description |
|------|------|-------------|
| `name` | | Unique identifier for the node within the graph. |
| `func` | | The function or ToolNode to execute. Determines streaming behavior. |
Example
handler = StreamNodeHandler("process", process_function)
async for chunk in handler.stream(config, state):
    print(chunk)

Methods:

| Name | Description |
|------|-------------|
| `__init__` | Initialize a new StreamNodeHandler instance. |
| `stream` | Execute the node function with streaming output and callback support. |

Source code in pyagenity/graph/utils/stream_node_handler.py
class StreamNodeHandler(BaseLoggingMixin):
    """Handles streaming execution for graph nodes in PyAgenity workflows.

    StreamNodeHandler manages the execution of nodes that can produce streaming output,
    including both regular function nodes and ToolNode instances. It supports dependency
    injection, callback management, event publishing, and incremental result processing.

    Attributes:
        name: Unique identifier for the node within the graph.
        func: The function or ToolNode to execute. Determines streaming behavior.

    Example:
        ```python
        handler = StreamNodeHandler("process", process_function)
        async for chunk in handler.stream(config, state):
            print(chunk)
        ```
    """

    def __init__(
        self,
        name: str,
        func: Union[Callable, "ToolNode"],
    ):
        """Initialize a new StreamNodeHandler instance.

        Args:
            name: Unique identifier for the node within the graph.
            func: The function or ToolNode to execute. Determines streaming behavior.
        """
        self.name = name
        self.func = func

    async def _handle_single_tool(
        self,
        tool_call: dict[str, Any],
        state: AgentState,
        config: dict[str, Any],
    ) -> AsyncIterable[Message]:
        function_name = tool_call.get("function", {}).get("name", "")
        function_args: dict = json.loads(tool_call.get("function", {}).get("arguments", "{}"))
        tool_call_id = tool_call.get("id", "")

        logger.info(
            "Node '%s' executing tool '%s' with %d arguments",
            self.name,
            function_name,
            len(function_args),
        )
        logger.debug("Tool arguments: %s", function_args)

        # Execute the tool function with injectable parameters
        tool_result_gen = self.func.stream(  # type: ignore
            function_name,  # type: ignore
            function_args,
            tool_call_id=tool_call_id,
            state=state,
            config=config,
        )
        logger.debug("Node '%s' tool execution completed successfully", self.name)

        async for result in tool_result_gen:
            if isinstance(result, Message):
                yield result

    async def _call_tools(
        self,
        last_message: Message,
        state: "AgentState",
        config: dict[str, Any],
    ) -> AsyncIterable[Message]:
        logger.debug("Node '%s' calling tools from message", self.name)
        if (
            hasattr(last_message, "tools_calls")
            and last_message.tools_calls
            and len(last_message.tools_calls) > 0
        ):
            # Execute tool calls
            for tool_call in last_message.tools_calls:
                result_gen = self._handle_single_tool(
                    tool_call,
                    state,
                    config,
                )
                async for result in result_gen:
                    if isinstance(result, Message):
                        yield result
        else:
            # No tool calls to execute, return available tools
            logger.exception("Node '%s': No tool calls to execute", self.name)
            raise NodeError("No tool calls to execute")

    def _prepare_input_data(
        self,
        state: "AgentState",
        config: dict[str, Any],
    ) -> dict:
        sig = inspect.signature(self.func)  # type: ignore Tool node won't come here
        input_data = {}
        default_data = {
            "state": state,
            "config": config,
        }

        # # Get injectable parameters to determine which ones to exclude from manual passing
        # # Prepare function arguments (excluding injectable parameters)
        for param_name, param in sig.parameters.items():
            # Skip *args/**kwargs
            if param.kind in (
                inspect.Parameter.VAR_POSITIONAL,
                inspect.Parameter.VAR_KEYWORD,
            ):
                continue

            # check its state, config
            if param_name in ["state", "config"]:
                input_data[param_name] = default_data[param_name]
            # Include regular function arguments
            elif param.default is inspect.Parameter.empty:
                raise TypeError(
                    f"Missing required parameter '{param_name}' for function '{self.func}'"
                )

        return input_data

    async def _call_normal_node(  # noqa: PLR0912, PLR0915
        self,
        state: "AgentState",
        config: dict[str, Any],
        callback_mgr: CallbackManager,
    ) -> AsyncIterable[dict[str, Any] | Message]:
        logger.debug("Node '%s' calling normal function", self.name)
        result: dict[str, Any] | Message = {}

        logger.debug("Node '%s' is a regular function, executing with callbacks", self.name)
        # This is a regular function - likely AI function
        # Create callback context for AI invocation
        context = CallbackContext(
            invocation_type=InvocationType.AI,
            node_name=self.name,
            function_name=getattr(self.func, "__name__", str(self.func)),
            metadata={"config": config},
        )

        # Execute before_invoke callbacks
        input_data = self._prepare_input_data(
            state,
            config,
        )

        last_message = state.context[-1] if state.context and len(state.context) > 0 else None

        event = EventModel.default(
            config,
            data={"state": state.model_dump()},
            event=Event.NODE_EXECUTION,
            content_type=[ContentType.STATE],
            node_name=self.name,
            extra={
                "node": self.name,
                "function_name": getattr(self.func, "__name__", str(self.func)),
                "last_message": last_message.model_dump() if last_message else None,
            },
        )
        publish_event(event)

        try:
            logger.debug("Node '%s' executing before_invoke callbacks", self.name)
            # Execute before_invoke callbacks
            input_data = await callback_mgr.execute_before_invoke(context, input_data)
            logger.debug("Node '%s' executing function", self.name)
            event.event_type = EventType.PROGRESS
            event.content = "Function execution started"
            publish_event(event)

            # Execute the actual function
            result = await call_sync_or_async(
                self.func,  # type: ignore
                **input_data,
            )
            logger.debug("Node '%s' function execution completed", self.name)

            logger.debug("Node '%s' executing after_invoke callbacks", self.name)
            # Execute after_invoke callbacks
            result = await callback_mgr.execute_after_invoke(context, input_data, result)

            # Now lets convert the response here only, upstream will be easy to handle
            ##############################################################################
            ################### Logics for streaming ##########################
            ##############################################################################
            """
            Check user sending command or not
            if command then we will check its streaming or not
            if streaming then we will yield from converter stream
            if not streaming then we will convert it and yield end event
            if its not command then we will check its streaming or not
            if streaming then we will yield from converter stream
            if not streaming then we will convert it and yield end event
            """
            # first check its sync and not streaming
            next_node = None
            final_result = result
            # if type of command then we will update it
            if isinstance(result, Command):
                # now check the updated
                if result.update:
                    final_result = result.update

                if result.state:
                    state = result.state
                    for msg in state.context:
                        yield msg

                next_node = result.goto

            messages = []
            if check_non_streaming(final_result):
                new_state, messages, next_node = await process_node_result(
                    final_result,
                    state,
                    messages,
                )
                event.data["state"] = new_state.model_dump()
                event.event_type = EventType.END
                event.data["messages"] = [m.model_dump() for m in messages] if messages else []
                event.data["next_node"] = next_node
                publish_event(event)
                for m in messages:
                    yield m

                yield {
                    "is_non_streaming": True,
                    "state": new_state,
                    "messages": messages,
                    "next_node": next_node,
                }
                return  # done

            # If the result is a ConverterCall with stream=True, use the converter
            if isinstance(result, ModelResponseConverter) and result.response:
                stream_gen = result.stream(
                    config,
                    node_name=self.name,
                    meta={
                        "function_name": getattr(self.func, "__name__", str(self.func)),
                    },
                )
                # this will return event_model or message
                async for item in stream_gen:
                    if isinstance(item, Message) and not item.delta:
                        messages.append(item)
                    yield item
            # Things are done, so publish event and yield final response
            event.event_type = EventType.END
            if messages:
                final_msg = messages[-1]
                event.data["message"] = final_msg.model_dump()
                # Populate simple content and structured blocks when available
                event.content = (
                    final_msg.text() if isinstance(final_msg.content, list) else final_msg.content
                )
                if isinstance(final_msg.content, list):
                    event.content_blocks = final_msg.content
            else:
                event.data["message"] = None
                event.content = ""
                event.content_blocks = None
            event.content_type = [ContentType.MESSAGE, ContentType.STATE]
            publish_event(event)
            # if user use command and its streaming in that case we need to handle next node also
            yield {
                "is_non_streaming": False,
                "messages": messages,
                "next_node": next_node,
            }

        except Exception as e:
            logger.warning(
                "Node '%s' execution failed, executing error callbacks: %s", self.name, e
            )
            # Execute error callbacks
            recovery_result = await callback_mgr.execute_on_error(context, input_data, e)

            if isinstance(recovery_result, Message):
                logger.info(
                    "Node '%s' recovered from error using callback result",
                    self.name,
                )
                # Use recovery result instead of raising the error
                event.event_type = EventType.END
                event.content = "Function execution recovered from error"
                event.data["message"] = recovery_result.model_dump()
                event.content_type = [ContentType.MESSAGE, ContentType.STATE]
                publish_event(event)

                yield recovery_result
            else:
                # Re-raise the original error
                logger.error("Node '%s' could not recover from error", self.name)
                event.event_type = EventType.ERROR
                event.content = f"Function execution failed: {e}"
                event.data["error"] = str(e)
                event.content_type = [ContentType.ERROR, ContentType.STATE]
                publish_event(event)
                raise

    async def stream(
        self,
        config: dict[str, Any],
        state: AgentState,
        callback_mgr: CallbackManager = Inject[CallbackManager],
    ) -> AsyncGenerator[dict[str, Any] | Message]:
        """Execute the node function with streaming output and callback support.

        Handles both ToolNode and regular function nodes, yielding incremental results
        as they become available. Supports dependency injection, callback management,
        and event publishing for monitoring and debugging.

        Args:
            config: Configuration dictionary containing execution context and settings.
            state: Current AgentState providing workflow context and shared state.
            callback_mgr: Callback manager for pre/post execution hook handling.

        Yields:
            Dictionary objects or Message instances representing incremental outputs
            from the node function. The exact type and frequency of yields depends on
            the node function's streaming implementation.

        Raises:
            NodeError: If node execution fails or encounters an error.

        Example:
            ```python
            async for chunk in handler.stream(config, state):
                print(chunk)
            ```
        """
        logger.info("Executing node '%s'", self.name)
        logger.debug(
            "Node '%s' execution with state context size=%d, config keys=%s",
            self.name,
            len(state.context) if state.context else 0,
            list(config.keys()) if config else [],
        )

        # Publishing events is not required in this function.
        # If it's a ToolNode, events are already handled there, from start to end.
        # In this class we only need to handle normal function calls;
        # events are yielded from here only for normal function calls, while
        # ToolNode yields events from its own stream method.

        try:
            if isinstance(self.func, ToolNode):
                logger.debug("Node '%s' is a ToolNode, executing tool calls", self.name)
                # This is tool execution - handled separately in ToolNode
                if state.context and len(state.context) > 0:
                    last_message = state.context[-1]
                    logger.debug("Node '%s' processing tool calls from last message", self.name)
                    result = self._call_tools(
                        last_message,
                        state,
                        config,
                    )
                    async for item in result:
                        yield item
                    # Check if last message has tool calls to execute
                else:
                    # No context, return available tools
                    error_msg = "No context available for tool execution"
                    logger.error("Node '%s': %s", self.name, error_msg)
                    raise NodeError(error_msg)

            else:
                result = self._call_normal_node(
                    state,
                    config,
                    callback_mgr,
                )
                async for item in result:
                    yield item

            logger.info("Node '%s' execution completed successfully", self.name)
        except Exception as e:
            # This is the final catch-all for node execution errors
            logger.exception("Node '%s' execution failed: %s", self.name, e)
            raise NodeError(f"Error in node '{self.name}': {e!s}") from e
Attributes
func instance-attribute
func = func
name instance-attribute
name = name
Functions
__init__
__init__(name, func)

Initialize a new StreamNodeHandler instance.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `name` | `str` | Unique identifier for the node within the graph. | required |
| `func` | `Union[Callable, ToolNode]` | The function or ToolNode to execute. Determines streaming behavior. | required |
Source code in pyagenity/graph/utils/stream_node_handler.py
def __init__(
    self,
    name: str,
    func: Union[Callable, "ToolNode"],
):
    """Initialize a new StreamNodeHandler instance.

    Args:
        name: Unique identifier for the node within the graph.
        func: The function or ToolNode to execute. Determines streaming behavior.
    """
    self.name = name
    self.func = func
stream async
stream(config, state, callback_mgr=Inject[CallbackManager])

Execute the node function with streaming output and callback support.

Handles both ToolNode and regular function nodes, yielding incremental results as they become available. Supports dependency injection, callback management, and event publishing for monitoring and debugging.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `config` | `dict[str, Any]` | Configuration dictionary containing execution context and settings. | required |
| `state` | `AgentState` | Current AgentState providing workflow context and shared state. | required |
| `callback_mgr` | `CallbackManager` | Callback manager for pre/post execution hook handling. | `Inject[CallbackManager]` |

Yields:

| Type | Description |
|------|-------------|
| `AsyncGenerator[dict[str, Any] \| Message]` | Dictionary objects or Message instances representing incremental outputs from the node function. The exact type and frequency of yields depends on the node function's streaming implementation. |

Raises:

| Type | Description |
|------|-------------|
| `NodeError` | If node execution fails or encounters an error. |

Example
async for chunk in handler.stream(config, state):
    print(chunk)
Source code in pyagenity/graph/utils/stream_node_handler.py
async def stream(
    self,
    config: dict[str, Any],
    state: AgentState,
    callback_mgr: CallbackManager = Inject[CallbackManager],
) -> AsyncGenerator[dict[str, Any] | Message]:
    """Execute the node function with streaming output and callback support.

    Handles both ToolNode and regular function nodes, yielding incremental results
    as they become available. Supports dependency injection, callback management,
    and event publishing for monitoring and debugging.

    Args:
        config: Configuration dictionary containing execution context and settings.
        state: Current AgentState providing workflow context and shared state.
        callback_mgr: Callback manager for pre/post execution hook handling.

    Yields:
        Dictionary objects or Message instances representing incremental outputs
        from the node function. The exact type and frequency of yields depends on
        the node function's streaming implementation.

    Raises:
        NodeError: If node execution fails or encounters an error.

    Example:
        ```python
        async for chunk in handler.stream(config, state):
            print(chunk)
        ```
    """
    logger.info("Executing node '%s'", self.name)
    logger.debug(
        "Node '%s' execution with state context size=%d, config keys=%s",
        self.name,
        len(state.context) if state.context else 0,
        list(config.keys()) if config else [],
    )

    # Publishing events is not required in this function.
    # If it's a ToolNode, events are already handled there, from start to end.
    # In this class we only need to handle normal function calls;
    # events are yielded from here only for normal function calls, while
    # ToolNode yields events from its own stream method.

    try:
        if isinstance(self.func, ToolNode):
            logger.debug("Node '%s' is a ToolNode, executing tool calls", self.name)
            # This is tool execution - handled separately in ToolNode
            if state.context and len(state.context) > 0:
                last_message = state.context[-1]
                logger.debug("Node '%s' processing tool calls from last message", self.name)
                result = self._call_tools(
                    last_message,
                    state,
                    config,
                )
                async for item in result:
                    yield item
                # Check if last message has tool calls to execute
            else:
                # No context, return available tools
                error_msg = "No context available for tool execution"
                logger.error("Node '%s': %s", self.name, error_msg)
                raise NodeError(error_msg)

        else:
            result = self._call_normal_node(
                state,
                config,
                callback_mgr,
            )
            async for item in result:
                yield item

        logger.info("Node '%s' execution completed successfully", self.name)
    except Exception as e:
        # This is the final catch-all for node execution errors
        logger.exception("Node '%s' execution failed: %s", self.name, e)
        raise NodeError(f"Error in node '{self.name}': {e!s}") from e
Functions
stream_utils

Streaming utility functions for PyAgenity graph workflows.

This module provides helper functions for determining whether a result from a node or tool execution should be treated as non-streaming (i.e., a complete result) or processed incrementally as a stream. These utilities are used throughout the graph execution engine to support both synchronous and streaming workflows.

Functions:

| Name | Description |
|------|-------------|
| `check_non_streaming` | Determine if a result should be treated as non-streaming. |

Classes
Functions
check_non_streaming
check_non_streaming(result)

Determine if a result should be treated as non-streaming.

Checks whether the given result is a complete, non-streaming output (such as a list, dict, string, Message, or AgentState) or if it should be processed incrementally as a stream.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `result` | | The result object returned from a node or tool execution. Can be any type. | required |

Returns:

| Name | Type | Description |
|------|------|-------------|
| `bool` | `bool` | True if the result is non-streaming and should be processed as a complete output; False if the result should be handled as a stream. |

Example

>>> check_non_streaming([Message.text_message("done")])
True
>>> check_non_streaming(Message.text_message("done"))
True
>>> check_non_streaming({"choices": [...]})
True
>>> check_non_streaming("some text")
True

Source code in pyagenity/graph/utils/stream_utils.py
def check_non_streaming(result) -> bool:
    """Determine if a result should be treated as non-streaming.

    Checks whether the given result is a complete, non-streaming output (such as a list,
    dict, string, Message, or AgentState) or if it should be processed incrementally as a stream.

    Args:
        result: The result object returned from a node or tool execution. Can be any type.

    Returns:
        bool: True if the result is non-streaming and should be processed as a complete output;
        False if the result should be handled as a stream.

    Example:
        >>> check_non_streaming([Message.text_message("done")])
        True
        >>> check_non_streaming(Message.text_message("done"))
        True
        >>> check_non_streaming({"choices": [...]})
        True
        >>> check_non_streaming("some text")
        True
    """
    if isinstance(result, list | dict | str):
        return True

    if isinstance(result, Message):
        return True

    if isinstance(result, AgentState):
        return True

    if isinstance(result, dict) and "choices" in result:
        return True

    return bool(isinstance(result, Message))
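Conversely, anything not matched by the `isinstance` checks above, for example a generator (or a `ModelResponseConverter` that still needs to stream), falls through and is handled incrementally. A small illustrative sketch:

```python
# Illustrative only: a generator is not a list, dict, str, Message, or AgentState,
# so it is treated as a stream.
def stream_of_chunks():
    yield "partial"
    yield "more"


check_non_streaming([Message.text_message("done")])  # True: complete result
check_non_streaming(stream_of_chunks())  # False: processed incrementally
```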
utils

Core utility functions for graph execution and state management.

This module provides essential utilities for PyAgenity graph execution, including state management, message processing, response formatting, and execution flow control. These functions handle the low-level operations that support graph workflow execution.

The utilities in this module are designed to work with PyAgenity's dependency injection system and provide consistent interfaces for common operations across different execution contexts.

Key functionality areas:

- State loading, creation, and synchronization
- Message processing and deduplication
- Response formatting based on granularity levels
- Node execution result processing
- Interrupt handling and execution flow control

Functions:

| Name | Description |
|------|-------------|
| `call_realtime_sync` | Call the realtime state sync hook if provided. |
| `check_and_handle_interrupt` | Check for interrupts and save state if needed. Returns True if interrupted. |
| `get_next_node` | Get the next node to execute based on edges. |
| `load_or_create_state` | Load existing state from checkpointer or create new state. |
| `parse_response` | Parse and format execution response based on specified granularity level. |
| `process_node_result` | Processes the result from a node execution, updating the agent state, message list, |
| `reload_state` | Load existing state from checkpointer or create new state. |
| `sync_data` | Sync the current state and messages to the checkpointer. |

Attributes:

- `StateT`
- `logger`
Attributes
StateT module-attribute
StateT = TypeVar('StateT', bound=AgentState)
logger module-attribute
logger = getLogger(__name__)
Classes
Functions
call_realtime_sync async
call_realtime_sync(state, config, checkpointer=Inject[BaseCheckpointer])

Call the realtime state sync hook if provided.

Source code in pyagenity/graph/utils/utils.py
async def call_realtime_sync(
    state: AgentState,
    config: dict[str, Any],
    checkpointer: BaseCheckpointer = Inject[BaseCheckpointer],  # will be auto-injected
) -> None:
    """Call the realtime state sync hook if provided."""
    if checkpointer:
        logger.debug("Calling realtime state sync hook")
        # await call_sync_or_async(checkpointer.a, config, state)
        await checkpointer.aput_state_cache(config, state)
check_and_handle_interrupt async
check_and_handle_interrupt(interrupt_before, interrupt_after, current_node, interrupt_type, state, config, _sync_data)

Check for interrupts and save state if needed. Returns True if interrupted.

Source code in pyagenity/graph/utils/utils.py
async def check_and_handle_interrupt(
    interrupt_before: list[str],
    interrupt_after: list[str],
    current_node: str,
    interrupt_type: str,
    state: AgentState,
    config: dict[str, Any],
    _sync_data: Callable,
) -> bool:
    """Check for interrupts and save state if needed. Returns True if interrupted."""
    interrupt_nodes = interrupt_before if interrupt_type == "before" else interrupt_after

    if current_node in interrupt_nodes:
        status = (
            ExecutionStatus.INTERRUPTED_BEFORE
            if interrupt_type == "before"
            else ExecutionStatus.INTERRUPTED_AFTER
        )
        state.set_interrupt(
            current_node,
            f"interrupt_{interrupt_type}: {current_node}",
            status,
        )
        # Save state and interrupt
        await _sync_data(state, config, [])
        logger.debug("Node '%s' interrupted", current_node)
        return True

    logger.debug(
        "No interrupts found for node '%s', continuing execution",
        current_node,
    )
    return False
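
Before and after each node, a graph runner can consult this helper. The following is a minimal sketch, assuming `state` is an `AgentState` instance and using an illustrative `persist` stub in place of the runner's real `_sync_data` callable:

```python
async def persist(state, config, messages):
    # Stand-in for the runner's _sync_data callable; the real one persists via the checkpointer.
    ...


interrupted = await check_and_handle_interrupt(
    interrupt_before=["search"],  # pause before these nodes
    interrupt_after=[],           # no post-execution pauses configured
    current_node="search",
    interrupt_type="before",
    state=state,
    config={"thread_id": "demo"},
    _sync_data=persist,
)
if interrupted:
    # State was saved with INTERRUPTED_BEFORE status; execution can resume later.
    ...
```
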
get_next_node
get_next_node(current_node, state, edges)

Get the next node to execute based on edges.

Source code in pyagenity/graph/utils/utils.py
def get_next_node(
    current_node: str,
    state: AgentState,
    edges: list,
) -> str:
    """Get the next node to execute based on edges."""
    # Find outgoing edges from current node
    outgoing_edges = [e for e in edges if e.from_node == current_node]

    if not outgoing_edges:
        logger.debug("No outgoing edges from node '%s', ending execution", current_node)
        return END

    # Handle conditional edges
    for edge in outgoing_edges:
        if edge.condition:
            try:
                condition_result = edge.condition(state)
                if hasattr(edge, "condition_result") and edge.condition_result is not None:
                    # Mapped conditional edge
                    if condition_result == edge.condition_result:
                        return edge.to_node
                elif isinstance(condition_result, str):
                    return condition_result
                elif condition_result:
                    return edge.to_node
            except Exception:
                logger.exception("Error evaluating condition for edge: %s", edge)
                continue

    # Return first static edge if no conditions matched
    static_edges = [e for e in outgoing_edges if not e.condition]
    if static_edges:
        return static_edges[0].to_node

    logger.debug("No valid edges found from node '%s', ending execution", current_node)
    return END
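
Routing precedence is: matching conditional edges first, then the first static edge, then END. A minimal sketch using a hypothetical stand-in object with only the attributes the function inspects (the real `Edge` class may be constructed differently):

```python
from dataclasses import dataclass


@dataclass
class FakeEdge:
    # Only the attributes get_next_node reads: from_node, to_node, condition, condition_result.
    from_node: str
    to_node: str
    condition: object = None
    condition_result: object = None


def needs_search(state) -> bool:
    return bool(state.context)  # illustrative condition


edges = [
    FakeEdge("process", "search", condition=needs_search),  # conditional edge
    FakeEdge("process", "respond"),                          # static fallback
]

# If needs_search(state) is truthy, "search" is chosen; otherwise the first
# static edge ("respond") is used. A node with no outgoing edges returns END.
next_node = get_next_node("process", state, edges)
```
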
load_or_create_state async
load_or_create_state(input_data, config, old_state, checkpointer=Inject[BaseCheckpointer])

Load existing state from checkpointer or create new state.

Attempts to fetch a realtime-synced state first, then falls back to the persistent checkpointer. If no existing state is found, creates a new state from the StateGraph's prototype state and merges any incoming messages. Supports partial state update via 'state' in input_data.

Source code in pyagenity/graph/utils/utils.py
async def load_or_create_state[StateT: AgentState](  # noqa: PLR0912, PLR0915
    input_data: dict[str, Any],
    config: dict[str, Any],
    old_state: StateT,
    checkpointer: BaseCheckpointer = Inject[BaseCheckpointer],  # will be auto-injected
) -> StateT:
    """Load existing state from checkpointer or create new state.

    Attempts to fetch a realtime-synced state first, then falls back to
    the persistent checkpointer. If no existing state is found, creates
    a new state from the `StateGraph`'s prototype state and merges any
    incoming messages. Supports partial state update via 'state' in input_data.
    """
    logger.debug("Loading or creating state with thread_id=%s", config.get("thread_id", "default"))

    # Try to load existing state if checkpointer is available
    if checkpointer:
        logger.debug("Attempting to load existing state from checkpointer")
        # first check realtime-synced state
        existing_state: StateT | None = await checkpointer.aget_state_cache(config)
        if not existing_state:
            logger.debug("No synced state found, trying persistent checkpointer")
            # If no synced state, try to get from persistent checkpointer
            existing_state = await checkpointer.aget_state(config)

        if existing_state:
            logger.info(
                "Loaded existing state with %d context messages, current_node=%s, step=%d",
                len(existing_state.context) if existing_state.context else 0,
                existing_state.execution_meta.current_node,
                existing_state.execution_meta.step,
            )
            # Normalize legacy node names (backward compatibility)
            # Some older runs may have persisted 'start'/'end' instead of '__start__'/'__end__'
            if existing_state.execution_meta.current_node == "start":
                existing_state.execution_meta.current_node = START
                logger.debug("Normalized legacy current_node 'start' to '%s'", START)
            elif existing_state.execution_meta.current_node == "end":
                existing_state.execution_meta.current_node = END
                logger.debug("Normalized legacy current_node 'end' to '%s'", END)
            elif existing_state.execution_meta.current_node == "__start__":
                existing_state.execution_meta.current_node = START
                logger.debug("Normalized legacy current_node '__start__' to '%s'", START)
            elif existing_state.execution_meta.current_node == "__end__":
                existing_state.execution_meta.current_node = END
                logger.debug("Normalized legacy current_node '__end__' to '%s'", END)
            # Merge new messages with existing context
            new_messages = input_data.get("messages", [])
            if new_messages:
                logger.debug("Merging %d new messages with existing context", len(new_messages))
                existing_state.context = add_messages(existing_state.context, new_messages)
            # Merge partial state fields if provided
            partial_state = input_data.get("state", {})
            if partial_state and isinstance(partial_state, dict):
                logger.debug("Merging partial state with %d fields", len(partial_state))
                _update_state_fields(existing_state, partial_state)
            # Update current node if available
            if "current_node" in partial_state and partial_state["current_node"] is not None:
                existing_state.set_current_node(partial_state["current_node"])
            return existing_state
    else:
        logger.debug("No checkpointer available, will create new state")

    # Create new state by deep copying the graph's prototype state
    logger.info("Creating new state from graph prototype")
    state = copy.deepcopy(old_state)

    # Ensure core AgentState fields are properly initialized
    if hasattr(state, "context") and not isinstance(state.context, list):
        state.context = []
        logger.debug("Initialized empty context list")
    if hasattr(state, "context_summary") and state.context_summary is None:
        state.context_summary = None
        logger.debug("Initialized context_summary as None")
    if hasattr(state, "execution_meta"):
        # Create a fresh execution metadata
        state.execution_meta = ExecMeta(current_node=START)
        logger.debug("Created fresh execution metadata starting at %s", START)

    # Set thread_id in execution metadata
    thread_id = config.get("thread_id", "default")
    state.execution_meta.thread_id = thread_id
    logger.debug("Set thread_id to %s", thread_id)

    # Merge new messages with context
    new_messages = input_data.get("messages", [])
    if new_messages:
        logger.debug("Adding %d new messages to fresh state", len(new_messages))
        state.context = add_messages(state.context, new_messages)
    # Merge partial state fields if provided
    partial_state = input_data.get("state", {})
    if partial_state and isinstance(partial_state, dict):
        logger.debug("Merging partial state with %d fields", len(partial_state))
        _update_state_fields(state, partial_state)

    logger.info(
        "Created new state with %d context messages", len(state.context) if state.context else 0
    )
    if "current_node" in partial_state and partial_state["current_node"] is not None:
        # Normalize legacy values if provided in partial state
        next_node = partial_state["current_node"]
        if next_node == "__start__":
            next_node = START
        elif next_node == "__end__":
            next_node = END
        state.set_current_node(next_node)
    return state  # type: ignore[return-value]
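
A hedged usage sketch of the `input_data` shape this function reads (the `messages` and `state` keys above); the thread id and field values are illustrative, and `AgentState()` is assumed to be constructible with defaults:

```python
state = await load_or_create_state(
    input_data={
        "messages": [Message.text_message("Hello!")],  # merged into state.context
        "state": {"current_node": "process"},          # partial field update
    },
    config={"thread_id": "thread-42"},
    old_state=AgentState(),  # prototype, deep-copied only when nothing can be loaded
)
```
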
parse_response async
parse_response(state, messages, response_granularity=ResponseGranularity.LOW)

Parse and format execution response based on specified granularity level.

Formats the final response from graph execution according to the requested granularity level, allowing clients to receive different levels of detail depending on their needs.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `state` | `AgentState` | The final agent state after graph execution. | *required* |
| `messages` | `list[Message]` | List of messages generated during execution. | *required* |
| `response_granularity` | `ResponseGranularity` | Level of detail to include in the response: FULL returns the complete state object and all messages; PARTIAL returns context, summary, and messages; LOW returns only the messages (default). | `LOW` |

Returns:

| Type | Description |
| --- | --- |
| `dict[str, Any]` | Dictionary containing the formatted response, with keys depending on granularity level. Always includes a 'messages' key with execution results. |
Example
# LOW granularity (default)
response = await parse_response(state, messages)
# Returns: {"messages": [Message(...), ...]}

# FULL granularity
response = await parse_response(state, messages, ResponseGranularity.FULL)
# Returns: {"state": AgentState(...), "messages": [Message(...), ...]}
Source code in pyagenity/graph/utils/utils.py
async def parse_response(
    state: AgentState,
    messages: list[Message],
    response_granularity: ResponseGranularity = ResponseGranularity.LOW,
) -> dict[str, Any]:
    """Parse and format execution response based on specified granularity level.

    Formats the final response from graph execution according to the requested
    granularity level, allowing clients to receive different levels of detail
    depending on their needs.

    Args:
        state: The final agent state after graph execution.
        messages: List of messages generated during execution.
        response_granularity: Level of detail to include in the response:
            - FULL: Returns complete state object and all messages
            - PARTIAL: Returns context, summary, and messages
            - LOW: Returns only the messages (default)

    Returns:
        Dictionary containing the formatted response with keys depending on
        granularity level. Always includes 'messages' key with execution results.

    Example:
        ```python
        # LOW granularity (default)
        response = await parse_response(state, messages)
        # Returns: {"messages": [Message(...), ...]}

        # FULL granularity
        response = await parse_response(state, messages, ResponseGranularity.FULL)
        # Returns: {"state": AgentState(...), "messages": [Message(...), ...]}
        ```
    """
    match response_granularity:
        case ResponseGranularity.FULL:
            # Return full state and messages
            return {"state": state, "messages": messages}
        case ResponseGranularity.PARTIAL:
            # Return state and summary of messages
            return {
                "context": state.context,
                "summary": state.context_summary,
                "message": messages,
            }
        case ResponseGranularity.LOW:
            # Return all messages from state context
            return {"messages": messages}

    return {"messages": messages}
process_node_result async
process_node_result(result, state, messages)

Processes the result from a node execution, updating the agent state, message list, and determining the next node.

Supports:

- Handling results of type Command, AgentState, Message, list, str, dict, or other types.
- Deduplicating messages by message_id.
- Updating the agent state and its context with new messages.
- Extracting navigation information (next node) from Command results.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `result` | `Any` | The output from a node execution. Can be a Command, AgentState, Message, list, str, dict, ModelResponse, or other types. | *required* |
| `state` | `StateT` | The current agent state. | *required* |
| `messages` | `list[Message]` | The list of messages accumulated so far. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `tuple[StateT, list[Message], str \| None]` | The updated agent state, the updated list of messages (with new, unique messages added), and the identifier of the next node to execute if specified, otherwise None. |

Source code in pyagenity/graph/utils/utils.py
async def process_node_result[StateT: AgentState](  # noqa: PLR0915
    result: Any,
    state: StateT,
    messages: list[Message],
) -> tuple[StateT, list[Message], str | None]:
    """
    Processes the result from a node execution, updating the agent state, message list,
    and determining the next node.

    Supports:
    - Handling results of type Command, AgentState, Message, list, str, dict,
            or other types.
        - Deduplicating messages by message_id.
        - Updating the agent state and its context with new messages.
        - Extracting navigation information (next node) from Command results.

    Args:
        result (Any): The output from a node execution. Can be a Command, AgentState, Message,
            list, str, dict, ModelResponse, or other types.
        state (StateT): The current agent state.
        messages (list[Message]): The list of messages accumulated so far.

    Returns:
        tuple[StateT, list[Message], str | None]:
            - The updated agent state.
            - The updated list of messages (with new, unique messages added).
            - The identifier of the next node to execute, if specified; otherwise, None.
    """
    next_node = None
    existing_ids = {msg.message_id for msg in messages}
    new_messages = []

    def add_unique_message(msg: Message) -> None:
        """Add message only if it doesn't already exist."""
        if msg.message_id not in existing_ids:
            new_messages.append(msg)
            existing_ids.add(msg.message_id)

    async def create_and_add_message(content: Any) -> Message:
        """Create message from content and add if unique."""
        if isinstance(content, Message):
            msg = content
        elif isinstance(content, ModelResponseConverter):
            msg = await content.invoke()
        elif isinstance(content, str):
            msg = Message.text_message(
                content,
                role="assistant",
            )

        else:
            err = f"""
            Unsupported content type for message: {type(content)}.
            Supported types are: AgentState, Message, ModelResponseConverter, Command, str,
            dict (OpenAI style/Native Message).
            """
            raise ValueError(err)

        add_unique_message(msg)
        return msg

    def handle_state_message(old_state: StateT, new_state: StateT) -> None:
        """Handle state messages by updating the context."""
        old_messages = {}
        if old_state.context:
            old_messages = {msg.message_id: msg for msg in old_state.context}

        if not new_state.context:
            return
        # now save all the new messages
        for msg in new_state.context:
            if msg.message_id in old_messages:
                continue
            # otherwise save it
            add_unique_message(msg)

    # Process different result types
    if isinstance(result, Command):
        # Handle state updates
        if result.update:
            if isinstance(result.update, AgentState):
                handle_state_message(state, result.update)  # type: ignore[assignment]
                state = result.update  # type: ignore[assignment]
            elif isinstance(result.update, list):
                for item in result.update:
                    await create_and_add_message(item)
            else:
                await create_and_add_message(result.update)

        # Handle navigation
        next_node = result.goto

    elif isinstance(result, AgentState):
        handle_state_message(state, result)  # type: ignore[assignment]
        state = result  # type: ignore[assignment]

    elif isinstance(result, Message):
        add_unique_message(result)

    elif isinstance(result, list):
        # Handle list of items (convert each to message)
        for item in result:
            await create_and_add_message(item)
    else:
        # Handle single items (str, dict, model_dump-capable, or other)
        await create_and_add_message(result)

    # Add new messages to the main list and state context
    if new_messages:
        messages.extend(new_messages)
        state.context = add_messages(state.context, new_messages)

    return state, messages, next_node
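
A short sketch of the common result shapes, assuming `state` is an `AgentState` instance and using `Message.text_message` as in the module usage example:

```python
# A node that returned a list of messages: each unseen message_id is appended.
state, messages, next_node = await process_node_result(
    [Message.text_message("analysis complete")],
    state,
    messages=[],
)
assert next_node is None  # only Command results carry a `goto` target

# A plain string result is wrapped in an assistant Message before being added.
state, messages, next_node = await process_node_result("done", state, messages)
```
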
reload_state async
reload_state(config, old_state, checkpointer=Inject[BaseCheckpointer])

Reload existing state from the checkpointer.

Attempts to fetch a realtime-synced state first, then falls back to the persistent checkpointer. If no existing state is found, the provided state is returned unchanged.

Source code in pyagenity/graph/utils/utils.py
async def reload_state[StateT: AgentState](
    config: dict[str, Any],
    old_state: StateT,
    checkpointer: BaseCheckpointer = Inject[BaseCheckpointer],  # will be auto-injected
) -> StateT:
    """Load existing state from checkpointer or create new state.

    Attempts to fetch a realtime-synced state first, then falls back to
    the persistent checkpointer. If no existing state is found, creates
    a new state from the `StateGraph`'s prototype state and merges any
    incoming messages. Supports partial state update via 'state' in input_data.
    """
    logger.debug("Loading or creating state with thread_id=%s", config.get("thread_id", "default"))

    if not checkpointer:
        return old_state

    # first check realtime-synced state
    existing_state: AgentState | None = await checkpointer.aget_state_cache(config)
    if not existing_state:
        logger.debug("No synced state found, trying persistent checkpointer")
        # If no synced state, try to get from persistent checkpointer
        existing_state = await checkpointer.aget_state(config)

    if not existing_state:
        logger.warning("No existing state found to reload, returning old state")
        return old_state

    logger.info(
        "Loaded existing state with %d context messages, current_node=%s, step=%d",
        len(existing_state.context) if existing_state.context else 0,
        existing_state.execution_meta.current_node,
        existing_state.execution_meta.step,
    )
    # Normalize legacy node names (backward compatibility)
    # Some older runs may have persisted 'start'/'end' instead of '__start__'/'__end__'
    if existing_state.execution_meta.current_node == "start":
        existing_state.execution_meta.current_node = START
        logger.debug("Normalized legacy current_node 'start' to '%s'", START)
    elif existing_state.execution_meta.current_node == "end":
        existing_state.execution_meta.current_node = END
        logger.debug("Normalized legacy current_node 'end' to '%s'", END)
    elif existing_state.execution_meta.current_node == "__start__":
        existing_state.execution_meta.current_node = START
        logger.debug("Normalized legacy current_node '__start__' to '%s'", START)
    elif existing_state.execution_meta.current_node == "__end__":
        existing_state.execution_meta.current_node = END
        logger.debug("Normalized legacy current_node '__end__' to '%s'", END)
    return existing_state
sync_data async
sync_data(state, config, messages, trim=False, checkpointer=Inject[BaseCheckpointer], context_manager=Inject[BaseContextManager])

Sync the current state and messages to the checkpointer.

Source code in pyagenity/graph/utils/utils.py
async def sync_data(
    state: AgentState,
    config: dict[str, Any],
    messages: list[Message],
    trim: bool = False,
    checkpointer: BaseCheckpointer = Inject[BaseCheckpointer],  # will be auto-injected
    context_manager: BaseContextManager = Inject[BaseContextManager],  # will be auto-injected
) -> bool:
    """Sync the current state and messages to the checkpointer."""
    is_context_trimmed = False

    new_state = copy.deepcopy(state)
    # if context manager is available then utilize it
    if context_manager and trim:
        new_state = await context_manager.atrim_context(state)
        is_context_trimmed = True

    # first sync with realtime then main db
    await call_realtime_sync(state, config, checkpointer)
    logger.debug("Persisting state and %d messages to checkpointer", len(messages))

    if checkpointer:
        await checkpointer.aput_state(config, new_state)
        if messages:
            await checkpointer.aput_messages(config, messages)

    return is_context_trimmed
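
A typical call after a node finishes, letting the injected checkpointer and context manager resolve via dependency injection; `new_messages` stands for the messages produced in the current step, and `trim=True` only has an effect when a `BaseContextManager` is registered:

```python
trimmed = await sync_data(
    state,
    config={"thread_id": "thread-42"},
    messages=new_messages,  # persisted alongside the (possibly trimmed) state
    trim=True,              # ask the context manager to trim context before persisting
)
```
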