
Procedural memory for AI agents

Your AI keeps forgetting your corrections.

Every correction your AI receives becomes a behavioral rule that compounds over time. Stop repeating. Start learning.

Start Free Trial

Gradata Cloud

Dashboard, sync, meta-rules · 7 days free

Python SDK

Raw engine for developers

Connect MCP Server

Any AI

Plug and play, no code

Start Free Trial

Full Experience

Meta-rules + Dashboard, 7 days free

View on GitHub

Open source · AGPL-3.0 · Python 3.11+


The Problem

Sound familiar?

"

It's like teaching someone who has permanent amnesia.

"

The frustration isn't the mistake. It's the repetition.

"

I'm paying a 'tone tax' with my time on every project.

95% of AI users paste the same instructions into every new chat.

It works 60% of the time.

The shift

Stop editing. Start directing.

Before: 30–90 min of daily correction time
After: AI gets it right on the first try 92% of the time
83% fewer corrections by session 15
"Your AI is finally doing what you want it to do."

Results

We can prove it works.

70% won blind tests vs hand-written rules
3,000 independent blind comparisons
15% fewer corrections each round; it compounds

Independent evaluation, 100+ AI/ML professionals

Corrections per session
[Chart: corrections fall from high at session 1 to low at session 15]
100% preferred the learned output in blind tests
<100 ms latency overhead per request
0 external dependencies; runs locally


Ready to stop repeating yourself?

Free to start. BYO key. No credit card.

FAQ

Common questions

How long until I see results?

Most users see corrections drop within 10–15 sessions. The brain converges as it accumulates repeated signals.

What if it learns a bad correction?

Rules need multiple repeated signals before they stick. One bad correction stays weak and dies if unreinforced. You can also undo any rule instantly with brain.rollback().
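As an illustrative sketch only (the real Gradata internals may differ, and every name here except rollback() is hypothetical), the graduate-on-repeated-signals idea can be modeled like this:

```python
# Toy model: a correction becomes a rule only after repeated signals,
# and rollback() undoes the most recent correction instantly.
# GRADUATION_THRESHOLD and all class/method names except rollback()
# are hypothetical, not Gradata's actual API.
from dataclasses import dataclass, field

GRADUATION_THRESHOLD = 3  # hypothetical: signals needed before a rule sticks

@dataclass
class Rule:
    text: str
    signals: int = 0  # how many times this correction has been repeated

    @property
    def graduated(self) -> bool:
        return self.signals >= GRADUATION_THRESHOLD

@dataclass
class Brain:
    rules: dict = field(default_factory=dict)
    _history: list = field(default_factory=list)

    def correct(self, text: str) -> Rule:
        """Record one correction signal for this rule text."""
        rule = self.rules.setdefault(text, Rule(text))
        rule.signals += 1
        self._history.append(text)
        return rule

    def rollback(self) -> None:
        """Undo the most recent correction."""
        text = self._history.pop()
        rule = self.rules[text]
        rule.signals -= 1
        if rule.signals == 0:
            del self.rules[text]  # unreinforced rule dies

brain = Brain()
for _ in range(3):
    rule = brain.correct("Use sentence case in headings")
assert rule.graduated            # repeated signals: the rule sticks
brain.rollback()                 # one undo drops it below the threshold
assert not rule.graduated
```

A single stray correction never reaches the threshold, so it stays weak and disappears if rolled back or never reinforced.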

Does it work with my model?

Yes. Gradata works with any LLM: OpenAI, Anthropic, LangChain, CrewAI, local models. It sits between you and the model at the system prompt layer.
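"Sits at the system prompt layer" can be sketched as a thin wrapper that prepends learned rules before any backend is called. This is a hypothetical illustration, not Gradata's actual code; inject_rules, chat, and call_model are names invented for the example:

```python
# Sketch: a memory layer between you and any model. Rules are injected
# into the system prompt, so the backend (OpenAI, Anthropic, local) is
# interchangeable. All function names here are hypothetical.
def inject_rules(base_system_prompt: str, rules: list[str]) -> str:
    """Prepend learned behavioral rules to the system prompt."""
    if not rules:
        return base_system_prompt
    rule_block = "\n".join(f"- {r}" for r in rules)
    return f"{base_system_prompt}\n\nLearned rules:\n{rule_block}"

def chat(user_msg: str, rules: list[str], call_model) -> str:
    system = inject_rules("You are a helpful assistant.", rules)
    return call_model(system, user_msg)

# Any backend works: just swap call_model.
fake_model = lambda system, user: f"[system saw {system.count('- ')} rules]"
print(chat("Draft a summary", ["No emojis", "UK spelling"], fake_model))
# → [system saw 2 rules]
```

Because the injection happens in the prompt, no model-specific integration is needed.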

Does my data leave my machine?

No. Gradata runs entirely on your machine by default. Zero network calls. Your corrections never leave your computer. Open source (AGPL-3.0).

How is this different from Custom Instructions?

Custom Instructions are static files you write and maintain manually. Gradata captures corrections dynamically and graduates them into rules that compound over time.

How much context does it use?

About 250 tokens at session start (your 10 most relevant rules). Additional rules swap in based on context. Roughly $1/month in added token cost.
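The fixed budget above can be sketched as a simple selection loop: rank rules by relevance and stop at roughly 250 tokens or 10 rules, whichever comes first. The scoring and the 4-characters-per-token estimate are simplifying assumptions, not Gradata's actual method:

```python
# Sketch of a fixed context budget: take the most relevant rules until
# a rough token budget is spent. The relevance scores and the crude
# len/4 token estimate are hypothetical simplifications.
def select_rules(rules: list[tuple[str, float]], budget_tokens: int = 250,
                 max_rules: int = 10) -> list[str]:
    """rules: (text, relevance) pairs; returns the rule texts to inject."""
    picked, spent = [], 0
    for text, _score in sorted(rules, key=lambda r: r[1], reverse=True):
        cost = max(1, len(text) // 4)  # crude token estimate
        if spent + cost > budget_tokens or len(picked) == max_rules:
            break
        picked.append(text)
        spent += cost
    return picked

rules = [("Always cite sources", 0.9), ("No emojis", 0.7),
         ("Use tables for data", 0.4)]
print(select_rules(rules))  # all three fit well within the 250-token budget
```

Capping the injected set is what keeps the per-session overhead small and roughly constant.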

Can I export my rules?

Your rules are yours. Export as JSON or YAML anytime with brain.export(). No lock-in.

Can my team share rules?

Yes. brain.share() and brain.absorb() let team members share graduated rules. Cloud Dashboard with team analytics coming soon.
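A minimal sketch of the share/absorb idea, under assumptions: rules are serialized as plain JSON, and on conflict the higher signal count wins. The real brain.share()/brain.absorb() semantics may differ; the merge policy here is invented for illustration:

```python
# Toy version of team rule sharing: share() serializes graduated rules
# (text -> signal count) and absorb() merges them into another brain.
# The keep-the-stronger-signal merge policy is a hypothetical choice.
import json

def share(rules: dict[str, int]) -> str:
    """Serialize graduated rules to a portable JSON payload."""
    return json.dumps(rules)

def absorb(mine: dict[str, int], payload: str) -> dict[str, int]:
    """Merge a teammate's rules; keep the higher signal count on overlap."""
    theirs = json.loads(payload)
    return {k: max(mine.get(k, 0), theirs.get(k, 0)) for k in mine | theirs}

alice = {"No emojis": 5, "UK spelling": 2}
bob = absorb({"No emojis": 3}, share(alice))
# → {"No emojis": 5, "UK spelling": 2}
```

Plain JSON keeps the payload inspectable and consistent with the no-lock-in export story.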

What does it cost?

The SDK is free and open source (AGPL-3.0). A cloud dashboard with team features, meta-rules, and weekly digests is coming soon.

How is this different from fine-tuning?

Fine-tuning requires datasets, GPU time, and retraining. Gradata learns from individual corrections in real time. Think scalpel vs cannon.