Metadata-Version: 2.4
Name: aceteam-aep
Version: 0.8.5
Summary: Agentic Execution Protocol™ (AEP™) - trust & safety infrastructure for AI agents
Project-URL: Homepage, https://aceteam.ai
Project-URL: Repository, https://github.com/aceteam-ai/aceteam-aep
Author-email: AceTeam AI <contact@aceteam.ai>
License-Expression: Apache-2.0
Keywords: aep,agents,ai,cost-tracking,llm
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.12
Requires-Dist: anthropic<1.0.0,>=0.45.0
Requires-Dist: anyio>=4.12
Requires-Dist: google-genai<2.0.0,>=1.0.0
Requires-Dist: httpx>=0.28.0
Requires-Dist: openai<2.0.0,>=1.65.0
Requires-Dist: overrides>=7.7.0
Requires-Dist: pydantic>=2.11.0
Provides-Extra: all
Requires-Dist: fastmcp>=2.0; extra == 'all'
Requires-Dist: jinja2>=3.1; extra == 'all'
Requires-Dist: lancedb>=0.30; extra == 'all'
Requires-Dist: ollama<1.0.0,>=0.4.0; extra == 'all'
Requires-Dist: openai<2.0.0,>=1.65.0; extra == 'all'
Requires-Dist: programasweights>=0.4.2; extra == 'all'
Requires-Dist: pyyaml>=6.0; extra == 'all'
Requires-Dist: redis[hiredis]>=5.0; extra == 'all'
Requires-Dist: rich>=13.0; extra == 'all'
Requires-Dist: starlette>=0.38; extra == 'all'
Requires-Dist: torch>=2.0; extra == 'all'
Requires-Dist: transformers>=4.40; extra == 'all'
Requires-Dist: uvicorn>=0.30; extra == 'all'
Provides-Extra: dashboard
Requires-Dist: jinja2>=3.1; extra == 'dashboard'
Requires-Dist: starlette>=0.38; extra == 'dashboard'
Requires-Dist: uvicorn>=0.30; extra == 'dashboard'
Provides-Extra: dev
Requires-Dist: fastmcp>=2.0; extra == 'dev'
Requires-Dist: httpx>=0.28; extra == 'dev'
Requires-Dist: jinja2>=3.1; extra == 'dev'
Requires-Dist: lancedb>=0.30; extra == 'dev'
Requires-Dist: ollama<1.0.0,>=0.4.0; extra == 'dev'
Requires-Dist: openai<2.0.0,>=1.65.0; extra == 'dev'
Requires-Dist: programasweights>=0.4.2; extra == 'dev'
Requires-Dist: pyright>=1.1; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.24; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Requires-Dist: pyyaml>=6.0; extra == 'dev'
Requires-Dist: redis[hiredis]>=5.0; extra == 'dev'
Requires-Dist: rich>=13.0; extra == 'dev'
Requires-Dist: ruff>=0.11; extra == 'dev'
Requires-Dist: starlette>=0.38; extra == 'dev'
Requires-Dist: torch>=2.0; extra == 'dev'
Requires-Dist: transformers>=4.40; extra == 'dev'
Requires-Dist: uvicorn>=0.30; extra == 'dev'
Provides-Extra: events
Requires-Dist: redis[hiredis]>=5.0; extra == 'events'
Provides-Extra: feedback
Requires-Dist: lancedb>=0.30; extra == 'feedback'
Provides-Extra: mcp
Requires-Dist: fastmcp>=2.0; extra == 'mcp'
Provides-Extra: ollama
Requires-Dist: ollama<1.0.0,>=0.4.0; extra == 'ollama'
Provides-Extra: proxy
Requires-Dist: jinja2>=3.1; extra == 'proxy'
Requires-Dist: programasweights>=0.4.2; extra == 'proxy'
Requires-Dist: starlette>=0.38; extra == 'proxy'
Requires-Dist: uvicorn>=0.30; extra == 'proxy'
Provides-Extra: safety
Requires-Dist: programasweights>=0.4.2; extra == 'safety'
Requires-Dist: torch>=2.0; extra == 'safety'
Requires-Dist: transformers>=4.40; extra == 'safety'
Provides-Extra: safety-lite
Requires-Dist: programasweights>=0.4.2; extra == 'safety-lite'
Provides-Extra: top
Requires-Dist: rich>=13.0; extra == 'top'
Provides-Extra: xai
Requires-Dist: openai<2.0.0,>=1.65.0; extra == 'xai'
Provides-Extra: yaml
Requires-Dist: pyyaml>=6.0; extra == 'yaml'
Description-Content-Type: text/markdown

# aceteam-aep — SafeClaw Gateway

[![PyPI](https://img.shields.io/pypi/v/aceteam-aep)](https://pypi.org/project/aceteam-aep/)
[![AEP Safe](https://img.shields.io/badge/AEP-Safe-brightgreen)](https://github.com/aceteam-ai/aceteam-aep)

AceTeam™ trust & safety infrastructure for AI agents. The Agentic Execution Protocol™ (AEP™) adds cost tracking, safety detection, and enforcement to any LLM-powered tool — **zero code changes required.**

The gateway runs a single process on one port with three interfaces:

| Path | What it does |
|------|-------------|
| `/v1/*` | OpenAI-compatible reverse proxy with safety enforcement |
| `/dashboard/` | Dashboard — cost, signals, policy controls, setup wizard |
| `/mcp/` | MCP tools for Claude Code and any MCP client |

## Installation

```bash
pip install "aceteam-aep[all]"             # Everything (recommended)
pip install "aceteam-aep[safety,proxy]"    # Safety detectors + proxy
pip install aceteam-aep                    # Core only (cost tracking + regex safety)
```

## Quick Start

```bash
# Install and start the gateway
pip install "aceteam-aep[all]"
aceteam-aep proxy
```

The gateway prints three URLs on startup:

```
  SafeClaw Gateway
  ───────────────────────────────────
  LLM Proxy:  http://localhost:8899/v1
  Dashboard:  http://localhost:8899/dashboard/
  MCP:        http://localhost:8899/mcp/
```

Open the dashboard — a **setup wizard** appears on first visit and walks you through pointing your agent at the proxy or configuring Claude Code.

**Point an agent at the gateway:**

```bash
export OPENAI_BASE_URL=http://localhost:8899/v1
export OPENAI_API_KEY=sk-your-key
openclaw run "analyze these financial statements"
```

Open **http://localhost:8899/dashboard/** — every LLM call appears in real time with cost, safety signals, and enforcement decisions.

The proxy intercepts **both directions**:
- **Incoming requests** — blocks dangerous prompts before they reach the API
- **Outgoing responses** — blocks PII, toxic content, and cost anomalies before the agent sees them

Works with OpenClaw, LangChain, CrewAI, curl, or any tool that calls the OpenAI API.

## What the Proxy Sees

The proxy is a reverse proxy (a man-in-the-middle by design): it reads the full request and the full response, and can block in either direction.

```
Your Agent
    │
    ├─── REQUEST ────────────────────────────────┐
    │    messages: [user prompt, tool results]  │
    │                                            ▼
    │                                   ┌──────────────┐
    │                                   │  AEP Proxy   │
    │                                   │              │
    │                                   │  ✓ Input     │──── if dangerous ──→ BLOCK (never reaches API)
    │                                   │    text      │
    │                                   │              │──── if safe ──→ forward to OpenAI
    │                                   │              │
    │                                   │  ✓ Output    │──── if PII/toxic ──→ BLOCK (agent never sees it)
    │                                   │    text      │
    │                                   │              │──── if safe ──→ return to agent
    │                                   │  ✓ Cost      │
    │                                   │  ✓ Tool calls│
    │                                   └──────────────┘
    │                                           │
    ◄─── RESPONSE ──────────────────────────────┘
         assistant message, token usage
```

| Data | Proxy Sees It? | Details |
|------|:--------------:|---------|
| User messages (input text) | **Yes** | Full message array from request body |
| LLM response (output text) | **Yes** | Full response including all choices |
| Tool call requests | **Yes** | What functions the LLM asks to call |
| Tool call results | **Yes** | Included in next request's messages |
| Token usage + cost | **Yes** | From response usage field |
| **Agent actions between calls** | **No** | File writes, code execution, browser actions happen inside the agent, not via the LLM API |
| **Application context** | **No** | Who is calling, data classification — unless sent via `X-AEP-*` headers |

**The proxy sees every word going to and from the LLM.** It cannot see what the agent does *between* LLM calls. For that, use the SDK (Layer 2).

## Two Layers: Proxy + SDK

Think **WireGuard + Tailscale**. WireGuard is a minimal wire protocol. Tailscale adds identity and management on top. Same here:

**Layer 1 — AEP Proxy (free, zero code changes)**
- Sees all LLM traffic (input, output, tool calls, cost)
- Runs safety detectors, enforces PASS/FLAG/BLOCK
- Dashboard at `/dashboard/`
- Works with any language, any framework

**Layer 2 — AEP SDK (application-level context)**
- Adds identity: `X-AEP-Entity: org:acme`
- Adds governance: `X-AEP-Classification: confidential`
- Adds provenance: citation chains, source tracking
- Via HTTP headers through the proxy, or via Python `wrap()`

Layer 1 gets developers in the door. Layer 2 is what enterprises need for compliance.

## Python SDK — Wrap Your Existing Client

```python
import openai
from aceteam_aep import wrap

client = wrap(openai.OpenAI())

# Use exactly as before — AEP intercepts transparently
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)

# AEP tracks everything
print(client.aep.cost_usd)             # $0.000150
print(client.aep.enforcement.action)   # "pass"
print(client.aep.safety_signals)       # []
client.aep.print_summary()             # Colored CLI output
```

Works with **OpenAI**, **Anthropic**, and any OpenAI-compatible client. Sync and async.

```python
import anthropic
from aceteam_aep import wrap

client = wrap(anthropic.Anthropic())
# Same API — client.aep.cost_usd, client.aep.safety_signals, etc.
```

## Safety Signals

Every LLM call is evaluated by pluggable safety detectors:

| Detector | What It Catches | Model |
|----------|----------------|-------|
| **PII** | SSN, email, phone, credit cards in input AND output | `iiiorg/piiranha-v1-detect-personal-information` (~110M) |
| **Content Safety** | Toxic, harmful, or unsafe content | `s-nlp/roberta_toxicity_classifier` (~125M) |
| **Agent Threat** | Port scans, subprocess execution, reverse shells, credential access, destructive commands | Regex patterns (11 patterns) |
| **Cost Anomaly** | Spend spikes >5x session average | Statistical (no model) |

Models lazy-load on first use and run on CPU. PII detection falls back to regex if `transformers` is not installed.
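The regex fallback can be pictured as a simple pattern scan over the text. A minimal illustrative sketch (the patterns below are hypothetical; the library's actual fallback rules may differ):

```python
import re

# Hypothetical PII patterns for illustration only.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_pii(text: str) -> list[str]:
    """Return the PII categories whose pattern matches the text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

print(scan_pii("Reach me at jane@example.com or 555-123-4567"))
# → ['email', 'phone']
```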

### Pre-flight Blocking

`wrap()` runs detectors on the input **before** making the API call. If a detector returns a HIGH severity signal that the enforcement policy would block, the request never reaches the LLM. Cost: $0.

```python
import openai

from aceteam_aep import wrap, AepPreflightBlock

client = wrap(openai.OpenAI())
try:
    response = client.chat.completions.create(...)
except AepPreflightBlock as e:
    print(f"Blocked before API call: {e}")
    # e.decision.reason has the details
```

### Configurable Enforcement Policy

```python
client = wrap(openai.OpenAI(), policy={
    "default_action": "flag",
    "detectors": {
        "pii": {"action": "block", "threshold": 0.8},
        "agent_threat": {"action": "block"},
        "cost_anomaly": {"action": "pass", "multiplier": 10},
    },
})
```

Or from a YAML file: `wrap(client, policy="aep-policy.yaml")`
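A hypothetical sketch of such a file, mirroring the inline policy dict above (the exact schema is defined by the library and may differ):

```yaml
# aep-policy.yaml — illustrative only; key names follow the Python dict above
default_action: flag
detectors:
  pii:
    action: block
    threshold: 0.8
  agent_threat:
    action: block
  cost_anomaly:
    action: pass
    multiplier: 10
```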

### Enforcement: PASS / FLAG / BLOCK

Every call produces an enforcement decision based on signal severity:

- **PASS** — No signals or low severity. Safe to proceed.
- **FLAG** — Medium severity. Route to human review.
- **BLOCK** — High severity (PII, toxic content). Prevent delivery.

```python
client = wrap(openai.OpenAI())
response = client.chat.completions.create(...)

match client.aep.enforcement.action:
    case "pass":
        return response
    case "flag":
        queue_for_review(response)
    case "block":
        return reject(client.aep.enforcement.reason)
```

### Custom Detectors

```python
import openai

from aceteam_aep import wrap
from aceteam_aep.safety.base import SafetySignal

class MyDetector:
    name = "my_detector"

    def check(self, *, input_text, output_text, call_id, **kwargs):
        if "secret" in output_text.lower():
            return [SafetySignal(
                signal_type="data_leak",
                severity="high",
                call_id=call_id,
                detail="Potential secret in output",
            )]
        return []

client = wrap(openai.OpenAI(), detectors=[MyDetector()])
```

## Governance Headers

Inject governance context via HTTP headers (any language, any framework):

```bash
curl http://localhost:8899/v1/chat/completions \
  -H "X-AEP-Entity: org:acme-corp" \
  -H "X-AEP-Classification: confidential" \
  -H "X-AEP-Consent: gdpr=granted,training=no" \
  -H "X-AEP-Budget: 5.00" \
  -H "X-AEP-Trace-ID: trace-abc123"
```

The proxy parses these headers, strips them before forwarding to the LLM (governance context never leaks to the provider), and includes classification and trace ID in the response headers.
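From Python, the same context can be attached as default headers on whatever HTTP client talks to the proxy; the OpenAI SDK, for instance, accepts a `default_headers=` argument. A minimal sketch using the header names above:

```python
# Governance headers from the curl example, as a plain dict.
# The proxy strips these before forwarding to the LLM provider.
AEP_HEADERS = {
    "X-AEP-Entity": "org:acme-corp",
    "X-AEP-Classification": "confidential",
    "X-AEP-Consent": "gdpr=granted,training=no",
    "X-AEP-Budget": "5.00",
    "X-AEP-Trace-ID": "trace-abc123",
}

# Attach to a client, e.g.:
#   client = openai.OpenAI(
#       base_url="http://localhost:8899/v1",
#       default_headers=AEP_HEADERS,
#   )
```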

## Docker Sidecar

For containerized agents (NanoClaw, CrewAI, DeerFlow, OpenClaw, NemoClaw):

```yaml
services:
  aep-proxy:
    image: ghcr.io/aceteam-ai/aep-proxy:latest
    ports: ["8899:8899"]
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
  agent:
    image: your-agent:latest
    environment:
      OPENAI_BASE_URL: http://aep-proxy:8899/v1
```

One env var. Zero code changes. The agent doesn't know AEP exists.

**Tested with NVIDIA NemoClaw/OpenShell:** Agent threats (port scanning, subprocess execution) blocked at the proxy before reaching the LLM. Normal calls pass through with receipts. See [aep-quickstart](https://github.com/aceteam-ai/aep-quickstart) for the full NemoClaw demo.

## Claude Code Integration

Add the gateway as an MCP server in your Claude Code config:

```json
{
  "mcpServers": {
    "aceteam": {
      "type": "streamable-http",
      "url": "http://localhost:8899/mcp/"
    }
  }
}
```

This gives Claude four tools: `check_safety`, `get_safety_status`, `set_safety_policy`, and `get_cost_summary`. All tools share live state with the proxy — safety checks via MCP appear in the dashboard and affect traffic enforcement.

See [docs/engineering/mcp-integration.md](docs/engineering/mcp-integration.md) for full tool reference.

## Dashboard

Two views — toggle between Developer and Executive:

**Developer:** Individual calls, safety signals, cost per call, governance context, call timeline.

**Executive:** Enforcement coverage %, threats blocked, compliance status (PII/threats/toxicity/anomalies), safety breakdown, cost attribution by entity.

**Policy controls:** Per-detector checkboxes and per-category Trust Engine toggles — adjust enforcement without restarting. The master safety toggle in the header disables all detectors instantly.

**Setup wizard:** Shows on first visit (zero calls). Guides through API key configuration and agent setup — provides the `OPENAI_BASE_URL` export command and Claude Code MCP config to copy.

```python
client.aep.serve_dashboard()  # http://localhost:8899
```

Dark-themed local web UI. Auto-refreshes every 2 seconds.

## CLI Output

```python
client.aep.print_summary()
```

```
──────────────────────────────────────────────────
  AEP Session Summary
──────────────────────────────────────────────────
  Calls:  5
  Cost:   $0.004200
  Safety: PASS
──────────────────────────────────────────────────
```

## Agent Loop (Advanced)

For building agents from scratch with full AEP compliance:

```python
import asyncio

from aceteam_aep import create_client, run_agent_loop, ChatMessage, tool

client = create_client("gpt-4o", api_key="sk-...")

@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"

async def main():
    return await run_agent_loop(
        client,
        [ChatMessage(role="user", content="Search for AEP protocol")],
        tools=[search],
        system_prompt="You are a helpful assistant.",
    )

result = asyncio.run(main())
```

## Workshop Guide

Step-by-step setup in 5 minutes — from install to safety signals firing:

**[docs/workshop-guide.md](docs/workshop-guide.md)**

Covers: proxy setup, routing agents (Python/OpenClaw/curl), triggering safety signals, governance headers, custom detectors. Works for workshops, onboarding, or self-guided evaluation.

## Providers

- **OpenAI** (GPT-4o, GPT-5, o1, o3)
- **Anthropic** (Claude Opus, Sonnet, Haiku)
- **Google** (Gemini 2.5, 3.0)
- **xAI** (Grok)
- **Ollama** (local models)
- **OpenAI-compatible** (SambaNova, TheAgentic, DeepSeek)

## Safety Badge

Add this badge to your repo's README to show it uses AEP safety enforcement:

```markdown
[![AEP Safe](https://img.shields.io/badge/AEP-Safe-brightgreen)](https://github.com/aceteam-ai/aceteam-aep)
```

## Trademarks

"Agentic Execution Protocol," "AEP," and "AceTeam" are trademarks of AceTeam. The software is licensed under Apache 2.0; the trademarks are not part of the license grant, and you may not use these names to endorse or promote derivative works without written permission.
