
Architecture

Advanced Agentic AI

See How It Works


Memory Architecture

In-context, short-term, episodic, and semantic.

Agents forget even with four memory layers. Gradata adds procedural memory — the missing fifth layer.

[Diagram: the four existing layers (In-Context, Short-Term, Episodic, Semantic), with Gradata added as the fifth.]

The Missing Layer

Every memory layer above exists in modern AI stacks. Procedural memory is the missing one. It's the layer that learns behavioral rules from your corrections. Here's how Gradata fills it:

[Diagram: Your App → Gradata → LLM. The user reviews the reply, a correction is captured, and the correction graduates into a rule.]
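Here is a minimal sketch of that loop in Python. Nothing below is the Gradata SDK; the toy functions only show where the layer sits: rules go in ahead of the model call, and a correction comes back out of the user's review.

```python
# Illustrative stand-in, not the Gradata SDK: the loop from the diagram above.

def fake_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for any chat-model call (OpenAI, Anthropic, local, ...)."""
    return f"[reply shaped by {len(system_prompt)} chars of system prompt]"

learned_rules = ["Answer in bullet points, not paragraphs."]  # already graduated

def chat(user_message: str) -> str:
    # Your App -> Gradata: learned rules are injected into the system prompt.
    system_prompt = "You are a helpful assistant.\n" + "\n".join(
        f"- {rule}" for rule in learned_rules
    )
    # Gradata -> LLM: the call goes out with the learned behavior attached.
    return fake_llm(system_prompt, user_message)

def review(correction: str | None) -> None:
    # User Reviews -> Correction: pushback is captured as a candidate rule.
    # (Graduation is simplified here; in practice repeated signals are needed.)
    if correction:
        learned_rules.append(correction)

print(chat("Summarize this report."))
review("Keep summaries under five sentences.")
print(chat("Summarize the next one."))  # the new rule now rides along
```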

Honest Comparison

Gradata is not for everyone.

It's for people who want AI that learns, not just remembers.

|                      | Do Nothing                    | System Prompts                     | Memory Layers                    | Fine-Tuning                     | Gradata                         |
|----------------------|-------------------------------|------------------------------------|----------------------------------|---------------------------------|---------------------------------|
| The approach         | Start over every conversation | Hand-write instructions for the AI | AI remembers facts about you     | Retrain the entire model        | AI learns from your corrections |
| What you do          | Repeat yourself. Every time.  | Write and update config files      | Tell it what to remember         | Build datasets, hire an ML team | Just correct it when it's wrong |
| Does it get smarter? | No                            | No                                 | Remembers more, doesn't improve  | Yes, but not per-user           | Yes, automatically              |
| Can you prove it?    | No                            | No                                 | No                               | Training metrics only           | Yes, measurable improvement     |
| Cost                 | Your time, every session      | Your time, occasionally            | API costs                        | $1K to $100K+                   | Free                            |

Train your context window.

Every correction becomes a rule. Every rule gets injected before the AI reads your next message. Three corrections in, the AI starts every conversation already knowing how you think.
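As a sketch of that "three corrections in" idea (the threshold below is illustrative, not a documented setting):

```python
# Illustrative only, not the SDK: repeated corrections graduate into a rule.
from collections import Counter

GRADUATE_AFTER = 3                 # matches the copy above; illustrative threshold
signals: Counter = Counter()       # correction text -> times seen
rules: list[str] = []              # graduated rules, injected every session

def correct(text: str) -> None:
    signals[text] += 1
    if signals[text] == GRADUATE_AFTER:
        rules.append(text)         # from here on, it precedes every conversation

for _ in range(3):
    correct("Use British spelling.")

print(rules)  # ['Use British spelling.']
```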

System prompts work. Until you grow.

Teams change. Preferences shift. Rules go stale. Gradata adapts so you don't have to rewrite everything from scratch.

Fine-tuning is a cannon.

Gradata is a scalpel. One requires a team, a dataset, and a runway. The other takes three corrections.

FAQ

Common questions

How quickly does it learn?

Most users see corrections drop within 10–15 sessions. The brain converges as it accumulates repeated signals.

What if I make a bad correction?

Rules need multiple repeated signals before they stick. One bad correction stays weak and dies if unreinforced. You can also undo any rule instantly with brain.rollback().
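A hedged usage sketch: only the rollback() name appears above; the import path, constructor, and whether rollback() takes a rule identifier are assumptions.

```python
# Hypothetical usage: only brain.rollback() is named on this page.
from gradata import Brain   # assumed import path and constructor

brain = Brain()

# A one-off bad correction stays a weak signal and decays on its own,
# so most of the time there is nothing to undo. If a rule does graduate
# and you want it gone:
brain.rollback()             # assumed call shape; may also accept a specific rule
```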

Does it work with my LLM or framework?

Yes. Gradata works with any LLM: OpenAI, Anthropic, LangChain, CrewAI, local models. It sits between you and the model at the system prompt layer.
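Sitting at the system prompt layer means the integration looks the same for any provider. Here is a sketch with the OpenAI Python SDK, where graduated_rules stands in for rules learned by Gradata (how you read them out of the SDK isn't shown on this page).

```python
# The Gradata-specific part is only the list of graduated rules; everything
# else is a normal OpenAI chat call, and Anthropic or local models work alike.
from openai import OpenAI

client = OpenAI()
graduated_rules = [                      # stand-in for rules learned by Gradata
    "Answer in bullet points.",
    "Assume the reader is a backend engineer.",
]

system_text = "You are a helpful assistant.\n" + "\n".join(
    f"- {rule}" for rule in graduated_rules
)

response = client.chat.completions.create(
    model="gpt-4o-mini",                 # any chat model; pick yours
    messages=[
        {"role": "system", "content": system_text},  # rules ride in ahead of the user turn
        {"role": "user", "content": "Draft the launch announcement."},
    ],
)
print(response.choices[0].message.content)
```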

Does my data leave my machine?

No. Gradata runs entirely on your machine by default. Zero network calls. Your corrections never leave your computer. Open source (AGPL-3.0).

How is this different from Custom Instructions?

Custom Instructions are static files you write and maintain manually. Gradata captures corrections dynamically and graduates them into rules that compound over time.

How many tokens does it add?

About 250 tokens at session start (your 10 most relevant rules). Additional rules swap in based on context. Roughly $1/month in added token cost.
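A toy version of that budgeting, not Gradata's actual selector: rank rules by relevance and stop at 10 rules or roughly 250 tokens, whichever comes first.

```python
# Not the real selection logic: a naive sketch of "10 most relevant rules,
# about 250 tokens", using a rough 4-characters-per-token estimate.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def select_rules(scored, max_rules: int = 10, token_budget: int = 250):
    chosen, spent = [], 0
    for _, rule in sorted(scored, reverse=True):   # highest relevance first
        cost = estimate_tokens(rule)
        if len(chosen) == max_rules or spent + cost > token_budget:
            break
        chosen.append(rule)
        spent += cost
    return chosen

print(select_rules([(0.9, "Answer in bullet points."),
                    (0.4, "Prefer metric units.")]))
```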

Am I locked in?

Your rules are yours. Export as JSON or YAML anytime with brain.export(). No lock-in.
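A hedged usage sketch: brain.export() is the only piece named above; its signature and return shape are assumptions (here it is assumed to hand back plain, JSON-serializable data).

```python
# Hypothetical usage: only brain.export() is named on this page.
import json

from gradata import Brain   # assumed import path and constructor

brain = Brain()
rules = brain.export()       # assumed: returns your rules as plain data

with open("my_rules.json", "w") as fh:
    json.dump(rules, fh, indent=2)   # a portable copy you own, outside Gradata
```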

Can my team share rules?

Yes. brain.share() and brain.absorb() let team members share graduated rules. Cloud Dashboard with team analytics coming soon.
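Another hedged sketch: only the share() and absorb() names appear above; what share() returns and what absorb() accepts are assumptions.

```python
# Hypothetical usage: only share() and absorb() are named on this page.
from gradata import Brain   # assumed import path and constructor

alice = Brain()
bob = Brain()

bundle = alice.share()       # assumed: packages Alice's graduated rules
bob.absorb(bundle)           # assumed: merges them into Bob's brain
```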

How much does it cost?

The SDK is free and open source (AGPL-3.0). A cloud dashboard with team features, meta-rules, and weekly digests is coming soon.

Why not just fine-tune?

Fine-tuning requires datasets, GPU time, and retraining. Gradata learns from individual corrections in real time. Think scalpel vs cannon.