
Install

One install. Three ways to use it.

Install once. Pick the integration that fits your workflow.

Python SDK

For custom code — embed Gradata in your app

import gradata

Use Gradata programmatically inside your Python application. Log corrections, inspect rules, export brains.

  • Full API surface
  • Runs in your process
  • Works with any LLM provider
  • 1,900+ tests

Best for AI CLI tools

CLI

For Claude Code, Codex CLI, Cursor, Aider, bash scripts

gradata --help

A shell command any AI agent can call. Token-efficient, composes with Unix pipes, zero ceremony.

  • Any shell-capable AI uses it natively
  • Low token overhead (~200 tokens vs 5K+ for MCP)
  • Composable with pipes
  • No schema negotiation

MCP Server

For Claude Desktop, ChatGPT Desktop, Windsurf, Zed

gradata mcp-serve

An MCP server any MCP-capable client can connect to. Auto-discovery, standardized schema, no shell needed.

  • Auto-discovery by MCP clients
  • Works without shell access
  • Standardized tool schema
  • Good for enterprise (OAuth, audit)
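
For example, a Claude Desktop setup registers the server in `claude_desktop_config.json` using the standard MCP `mcpServers` entry (the entry name `gradata` is arbitrary; adjust the command path to your install):

```json
{
  "mcpServers": {
    "gradata": {
      "command": "gradata",
      "args": ["mcp-serve"]
    }
  }
}
```

Other MCP clients (ChatGPT Desktop, Windsurf, Zed) use their own config files but the same command-plus-args shape.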

For developers

Three lines to get started.

from gradata import Brain
brain = Brain("./my-brain")
brain.correct(draft="Dear Sir...", final="Hey Mike...")
  • Zero dependencies
  • Works offline
  • Works with any LLM provider
  • 1,400+ tests
  • Open source (AGPL-3.0)
  • Python 3.11+
  • ~250 tokens per session (10 active rules; more activate by context)
  • SQLite under the hood

FAQ

Common questions

How long before Gradata changes the model's behavior?

Most users see corrections drop within 10–15 sessions. The brain converges as it accumulates repeated signals.

What if it learns from a bad correction?

Rules need multiple repeated signals before they stick. A one-off bad correction stays weak and dies if unreinforced. You can also undo any rule instantly with brain.rollback().
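
The reinforce-or-decay behavior can be sketched with a toy model. This is an illustration of the idea only, not Gradata's internals; the threshold, decay rate, and all names here are made up:

```python
# Toy model: a rule graduates only after repeated signals,
# and an unreinforced rule decays until it is dropped.
GRADUATE_AT = 3       # signals needed before a rule "sticks" (illustrative)
DECAY_PER_SESSION = 1  # strength lost per session without reinforcement

class ToyRule:
    def __init__(self, text):
        self.text = text
        self.strength = 1  # one signal on creation

    def reinforce(self):
        self.strength += 1

    def decay(self):
        self.strength -= DECAY_PER_SESSION

    @property
    def graduated(self):
        return self.strength >= GRADUATE_AT

    @property
    def dead(self):
        return self.strength <= 0

# A correction repeated across sessions graduates into a rule...
good = ToyRule("prefer casual greetings")
good.reinforce()
good.reinforce()
print(good.graduated)  # True

# ...while a one-off bad correction decays and dies.
bad = ToyRule("always write in French")
bad.decay()
print(bad.dead)  # True
```

The real mechanism lives inside the brain directory; the point is that single signals are cheap and reversible, while repetition is what earns persistence.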

Does it work with my LLM provider?

Yes. Gradata works with any LLM: OpenAI, Anthropic, LangChain, CrewAI, local models. It sits between you and the model at the system prompt layer.

Does my data leave my machine?

No. Gradata runs entirely on your machine by default. Zero network calls. Your corrections never leave your computer. Open source (AGPL-3.0).

How is this different from Custom Instructions?

Custom Instructions are static files you write and maintain manually. Gradata captures corrections dynamically and graduates them into rules that compound over time.

How many tokens does it add?

About 250 tokens at session start (your 10 most relevant rules). Additional rules swap in based on context. Roughly $1/month in added token cost.
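
The ~$1/month figure is easy to sanity-check with back-of-envelope arithmetic. The session count and per-token price below are illustrative assumptions, not measured numbers; plug in your own usage and provider pricing:

```python
# Rough monthly cost of Gradata's prompt overhead (assumed inputs).
tokens_per_session = 250         # rules injected at session start
sessions_per_day = 40            # heavy-use assumption
price_per_million_input = 3.00   # USD per 1M input tokens, example rate

monthly_tokens = tokens_per_session * sessions_per_day * 30
monthly_cost = monthly_tokens / 1_000_000 * price_per_million_input
print(f"{monthly_tokens:,} tokens ~= ${monthly_cost:.2f}/month")
```

At these assumptions that is 300,000 tokens and well under a dollar; lighter usage or cheaper models bring it down further.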

Can I export my data?

Your rules are yours. Export them as JSON or YAML anytime with brain.export(). No lock-in.

Can my team share rules?

Yes. brain.share() and brain.absorb() let team members share graduated rules. A cloud dashboard with team analytics is coming soon.
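
Conceptually, absorbing a teammate's brain is a merge of graduated rule sets. A minimal sketch of that idea (the dict-of-strengths shape and the keep-the-stronger policy are hypothetical, not the actual brain.share()/brain.absorb() format):

```python
# Toy merge of two teammates' graduated rules, keyed by rule text.
# When both have the same rule, keep the higher strength.
alice = {"prefer casual greetings": 5, "no exclamation marks": 3}
bob = {"prefer casual greetings": 2, "sign off with first name": 4}

merged = dict(alice)
for rule, strength in bob.items():
    merged[rule] = max(merged.get(rule, 0), strength)

print(len(merged))                        # 3 distinct rules
print(merged["prefer casual greetings"])  # 5 (higher of 5 and 2)
```

A merge like this is idempotent, so absorbing the same teammate twice changes nothing.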

How much does it cost?

The SDK is free and open source (AGPL-3.0). A cloud dashboard with team features, meta-rules, and weekly digests is coming soon.

How is this different from fine-tuning?

Fine-tuning requires datasets, GPU time, and retraining. Gradata learns from individual corrections in real time. Think scalpel versus cannon.