
How it works

Three steps. Zero effort.

Step 01

Correct

Just do what you already do: edit AI output.

brain.correct(
  draft="Dear Sir...",
  final="Hey..."
)
Step 02

Learn

Gradata figures out what you meant and remembers it.


Correction detected

"Dear Sir..." → "Hey..."


Rule graduated

No em dashes
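The graduation step above can be sketched in a few lines. This is an illustrative toy, not Gradata's actual internals: the class name, the pattern detector, and the three-signal threshold are all assumptions for the example.

```python
from collections import defaultdict

GRADUATION_THRESHOLD = 3  # assumed: repeated signals needed before a rule sticks

class TinyBrain:
    """Toy sketch: count repeated correction signals until a pattern graduates."""

    def __init__(self):
        self.signals = defaultdict(int)  # pattern -> how many times it was seen
        self.rules = set()               # graduated rules

    def correct(self, draft: str, final: str) -> None:
        # A real system would infer the pattern from the diff between
        # draft and final; we hard-code one illustrative detector here.
        if "—" in draft and "—" not in final:
            self._signal("No em dashes")

    def _signal(self, pattern: str) -> None:
        self.signals[pattern] += 1
        if self.signals[pattern] >= GRADUATION_THRESHOLD:
            self.rules.add(pattern)  # the pattern graduates into a rule

brain = TinyBrain()
for _ in range(3):  # three corrections, one pattern
    brain.correct(draft="on track — see you Friday",
                  final="on track, see you Friday")
print(brain.rules)  # {'No em dashes'}
```

One correction leaves the pattern as a weak signal; only the repeated edits push it over the threshold.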

Step 03

Never again

Your AI applies what it learned to every future session. Automatically.

# System prompt (auto-injected)

Apply learned user preferences before responding.

Learned rule applied:

No em dashes

Rules need repeated signals before they stick. One bad correction can't break anything.
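The injection step can be pictured as a small prompt builder: graduated rules are rendered into the system prompt before each session. The function name, rule list, and wording below are illustrative assumptions, not Gradata's actual API.

```python
# Hypothetical sketch of the system-prompt layer.
learned_rules = ["No em dashes", "Casual tone with teammates"]

def build_system_prompt(rules: list[str]) -> str:
    """Prepend graduated rules to the session's system prompt."""
    header = "Apply learned user preferences before responding."
    bullet_list = "\n".join(f"- {rule}" for rule in rules)
    return f"{header}\n\nLearned rules:\n{bullet_list}"

print(build_system_prompt(learned_rules))
```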

Try it live

See for yourself. No sign-up needed.

Watch a rule graduate from scratch. Three corrections, one pattern, permanent memory.

Hi Sarah — I wanted to provide a quick update on the project timeline. I’m pleased to confirm that we are currently on track for Friday. Please don’t hesitate to reach out if you have any additional questions.
just say on track for friday, no em dashes
Signal 1/3

Submit the correction above to see the brain learn.

Trust

What if it learns the wrong thing?

Mistakes don't stick.

It takes multiple similar corrections before anything becomes a rule. One bad edit, one typo, one off day: the brain ignores it. Only consistent patterns stick.

It self-corrects.

If a learned rule starts making output worse, the brain detects the drop and turns it off automatically. No babysitting required.
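One way to picture the self-correction: track a quality score per rule over recent sessions and deactivate any rule whose average falls below a floor. The class, window size, and threshold here are illustrative assumptions, not Gradata's actual mechanism.

```python
from collections import deque

QUALITY_FLOOR = 0.5  # assumed: below this average, the rule is disabled
WINDOW = 5           # assumed: number of recent sessions considered

class RuleMonitor:
    """Toy sketch: auto-disable rules whose recent quality scores degrade."""

    def __init__(self):
        self.scores: dict[str, deque] = {}
        self.active: dict[str, bool] = {}

    def record(self, rule: str, score: float) -> None:
        window = self.scores.setdefault(rule, deque(maxlen=WINDOW))
        window.append(score)
        # Rule stays active only while its recent average clears the floor.
        self.active[rule] = sum(window) / len(window) >= QUALITY_FLOOR

monitor = RuleMonitor()
for score in [0.9, 0.4, 0.3, 0.2]:  # output quality starts dropping
    monitor.record("No em dashes", score)
print(monitor.active["No em dashes"])  # False: rule auto-disabled
```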

You're always in control.

See something you don't like? Undo it. One line. The rule is gone.

brain.forget("casual tone")

FAQ

Common questions

How long until I see results?

Most users see corrections drop within 10–15 sessions. The brain converges as it accumulates repeated signals.

Can it learn a bad habit?

Rules need multiple repeated signals before they stick. One bad correction stays weak and dies if unreinforced. You can also undo any rule instantly with brain.rollback().

Does it work with my stack?

Yes. Gradata works with any LLM: OpenAI, Anthropic, LangChain, CrewAI, local models. It sits between you and the model at the system prompt layer.

Does my data leave my machine?

No. Gradata runs entirely on your machine by default. Zero network calls. Your corrections never leave your computer. Open source (AGPL-3.0).

How is this different from Custom Instructions?

Custom Instructions are static files you write and maintain manually. Gradata captures corrections dynamically and graduates them into rules that compound over time.

How much does it add to my token usage?

About 250 tokens at session start (your 10 most relevant rules). Additional rules swap in based on context. Roughly $1/month in added token cost.

Can I take my rules with me?

Your rules are yours. Export as JSON or YAML anytime with brain.export(). No lock-in.

Can my team share rules?

Yes. brain.share() and brain.absorb() let team members share graduated rules. Cloud Dashboard with team analytics coming soon.

What does it cost?

The SDK is free and open source (AGPL-3.0). A cloud dashboard with team features, meta-rules, and weekly digests is coming soon.

How is this different from fine-tuning?

Fine-tuning requires datasets, GPU time, and retraining. Gradata learns from individual corrections in real-time. Think scalpel vs cannon.