Metadata-Version: 2.4
Name: a2a-langgraph
Version: 0.1.0
Summary: A2A protocol implementation for LangGraph agents
Project-URL: Homepage, https://github.com/gutan-ai/a2a-langgraph
Project-URL: Documentation, https://github.com/gutan-ai/a2a-langgraph#readme
Project-URL: Repository, https://github.com/gutan-ai/a2a-langgraph
Project-URL: Issues, https://github.com/gutan-ai/a2a-langgraph/issues
Author-email: Hugo Romero <romerorico.hugo@gmail.com>
License: MIT
License-File: LICENSE
Keywords: a2a,agent2agent,agents,fastapi,langgraph
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries
Requires-Python: >=3.10
Requires-Dist: a2a-sdk>=1.0.2
Requires-Dist: langchain-core>=1.0.0
Requires-Dist: langgraph>=1.0.0
Requires-Dist: pydantic>=2.0.0
Provides-Extra: fastapi
Requires-Dist: a2a-sdk[http-server]>=0.3.25; extra == 'fastapi'
Requires-Dist: fastapi[standard]>=0.135.3; extra == 'fastapi'
Requires-Dist: uvicorn[standard]>=0.30.0; extra == 'fastapi'
Description-Content-Type: text/markdown

# a2a-langgraph

The opinionated way to expose a LangGraph agent as an A2A server.

[![PyPI version](https://img.shields.io/pypi/v/a2a-langgraph)](https://pypi.org/project/a2a-langgraph/)
[![Python](https://img.shields.io/pypi/pyversions/a2a-langgraph)](https://pypi.org/project/a2a-langgraph/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

---

## Why this exists

The [A2A protocol](https://google.github.io/A2A/) is quickly becoming the standard for agent interoperability. If you build with LangGraph, connecting your graph to that ecosystem requires a fair amount of boilerplate: wiring up an `AgentExecutor`, a `DefaultRequestHandler`, a `TaskStore`, an agent card, and mounting everything onto an HTTP server.

Two alternatives exist, but neither works well today.

The first is [5enxia/langgraph-a2a-server](https://github.com/5enxia/langgraph-a2a-server). It targets an outdated version of the A2A SDK and is no longer maintained.

The second is LangChain's own `AgentServer`. It handles the wiring for you but trades away control of your API. You lose the ability to define and manage your own endpoints alongside the agent, which matters in real production services. This gap is [actively discussed in the LangChain forum](https://forum.langchain.com/t/feature-request-native-support-for-a2a-protocol-remote-agents-as-sub-graphs/1521/4).

`a2a-langgraph` takes a third path: a small, opinionated adapter that mounts A2A endpoints directly onto your existing FastAPI app, leaving everything else under your control.

Two stabilization events make this the right moment to build something lasting:

- **LangGraph >= 1.0** shipped in November 2025, stabilizing the graph and state APIs.
- **a2a-sdk 1.0** was released in April 2026, making the protocol ready for production use.

---

## What it does

`a2a-langgraph` wraps a compiled LangGraph in an A2A `AgentExecutor` and mounts two endpoints onto your FastAPI app:

- `GET {mount_path}/.well-known/agent-card.json` — agent metadata
- `POST {mount_path}/` — A2A JSON-RPC endpoint (`message/send`, `message/stream`, task operations)

You bring the graph. The library handles the rest.
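As a sanity check after mounting, you can fetch the agent card from the well-known path above. This is a minimal sketch using only the standard library; the `agent_card_url` and `fetch_agent_card` helper names are illustrative, not part of this package.

```python
import json
from urllib.request import urlopen

def agent_card_url(base_url: str, mount_path: str) -> str:
    """Build the discovery URL for an agent mounted at `mount_path`."""
    return f"{base_url.rstrip('/')}{mount_path}/.well-known/agent-card.json"

def fetch_agent_card(base_url: str, mount_path: str = "/agent") -> dict:
    """Fetch and parse the agent card from a running server."""
    with urlopen(agent_card_url(base_url, mount_path)) as resp:
        return json.load(resp)
```

For the quickstart below, `fetch_agent_card("http://localhost:8000")` would return the card served at `http://localhost:8000/agent/.well-known/agent-card.json`.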

---

## Installation

```bash
pip install "a2a-langgraph[fastapi]"
```

Or with uv:

```bash
uv add "a2a-langgraph[fastapi]"
```

The `fastapi` extra pulls in FastAPI, Uvicorn, and the A2A HTTP server components.

---

## Quickstart

```python
from fastapi import FastAPI
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, MessagesState, StateGraph

from a2a_langgraph.fastapi import add_langgraph_fastapi_endpoint

# Build your LangGraph (ChatOpenAI requires the separate
# `langchain-openai` package and an OPENAI_API_KEY in the environment)
llm = ChatOpenAI(model="gpt-4o-mini")

def chatbot(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("chatbot", chatbot)
builder.add_edge(START, "chatbot")
builder.add_edge("chatbot", END)
graph = builder.compile()

# Create your FastAPI app
app = FastAPI()

# Mount the A2A endpoint
add_langgraph_fastapi_endpoint(
    app,
    graph=graph,
    mount_path="/agent",
    agent_name="My LangGraph Agent",
    agent_description="A simple chatbot exposed over A2A",
    base_url="http://localhost:8000",
)
```

Run with:

```bash
uvicorn main:app --reload
```

Your agent is now available at `http://localhost:8000/agent` and discoverable via its agent card.
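To exercise the JSON-RPC endpoint by hand, you can post a `message/send` request to the mount path. The payload below follows the A2A JSON-RPC message shape as published in the protocol spec; check the spec if field names have changed, and note that `build_send_payload` is a hypothetical helper, not part of this library.

```python
import json
import uuid
from urllib.request import Request, urlopen

def build_send_payload(text: str) -> dict:
    """Build a minimal A2A `message/send` JSON-RPC request body."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "messageId": str(uuid.uuid4()),
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }

if __name__ == "__main__":
    # Post to the A2A endpoint mounted at /agent in the quickstart.
    req = Request(
        "http://localhost:8000/agent/",
        data=json.dumps(build_send_payload("Hello!")).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        print(json.load(resp))
```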

---

## Configuration

`add_langgraph_fastapi_endpoint` accepts the most common options as keyword arguments: `agent_name`, `agent_description`, `version`, `base_url`, and `skills`. You can also pass a custom `task_store` or an `executor_cls` to override the default executor.

For the full list of options, see [`A2ALangGraphConfig`](src/a2a_langgraph/config.py).

---

## Custom output mapping

By default, the library reads the last `AIMessage` from the `messages` key of your graph's state. If your graph uses a different output structure, pass a custom `output_mapper`:

```python
from a2a_langgraph import LangGraphAgentExecutor
from a2a_langgraph.fastapi import add_langgraph_fastapi_endpoint

def my_output_mapper(result: dict) -> str:
    return result["my_custom_key"]

add_langgraph_fastapi_endpoint(
    app,
    graph=graph,
    mount_path="/agent",
    output_mapper=my_output_mapper,
)
```
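A mapper can also bridge graphs that sometimes return a plain string and sometimes a message list. This sketch assumes a hypothetical graph whose state has either an `answer` string key or a `messages` list whose entries expose their text via `.content` (as LangChain message objects do) or a `"content"` dict key; neither key name comes from this library.

```python
def robust_output_mapper(result: dict) -> str:
    """Pick the reply out of a graph result.

    Handles two hypothetical state shapes: a plain-string `answer` key,
    or a `messages` list ending in a message-like object.
    """
    if "answer" in result:
        return result["answer"]
    last = result["messages"][-1]
    # LangChain message objects expose text via `.content`;
    # plain dicts (e.g. in tests) via ["content"].
    return last.content if hasattr(last, "content") else last["content"]
```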

---

## Current limitations

- **Output is read from `messages` by default.** Graphs that store their final output in a custom state key need a custom `output_mapper` (see above).
- **No token-level streaming yet.** The agent sends a single A2A message after the graph finishes. Streaming intermediate tokens is on the roadmap.

---

## Roadmap

- Token-level streaming from LangGraph to A2A events
- Declarative custom state key support, aligned with both LangGraph and A2A best practices
- Starlette adapter for non-FastAPI setups
- LangGraph persistence and checkpointer integration

---

## Contributing

Fork the repo, open an issue, and send a pull request. All contributions are welcome.

---

## License

MIT
