
Use cases

Built for anyone tired of repeating themselves

Content Teams

"AI learns your style guide after 3 corrections. Never paste it again."

Engineering Teams

"Correct a PR comment once, Gradata enforces it on every future review."

Sales Teams

"Every rep's AI writes in their voice. Same tone, same close rate."

Recruiting

"Outreach that adapts to your writing style. Sounds human because it learned from you."

Marketing

"Train your brand voice once. Every campaign, every channel, same tone."

Customer Support

"One correction, every support agent learns. Consistent answers across the team."

Product Managers

"Specs, PRDs, updates: correct the format once, every doc follows the pattern."

Analysts & Researchers

"Teach it your analysis framework once. Every report follows your structure."

FAQ

Common questions

How long until I see results?

Most users see corrections drop within 10–15 sessions. The brain converges as it accumulates repeated signals.

What if it learns a bad correction?

Rules need multiple repeated signals before they stick. A single bad correction stays weak and fades if it isn't reinforced. You can also undo any rule instantly with brain.rollback().
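A minimal sketch of that graduation mechanic, assuming a signal counter with a threshold (the class, method names, and threshold here are illustrative, not the actual Gradata API):

```python
# Hypothetical sketch of signal-based rule graduation (not the Gradata API).
class CandidateRule:
    def __init__(self, text: str, threshold: int = 3):
        self.text = text
        self.signals = 0            # repeated corrections observed
        self.threshold = threshold  # signals needed before the rule sticks

    def reinforce(self):
        self.signals += 1

    def decay(self):
        # an unreinforced rule loses strength each session
        self.signals = max(0, self.signals - 1)

    @property
    def graduated(self) -> bool:
        return self.signals >= self.threshold

rule = CandidateRule("Use sentence case in headings")
rule.reinforce()                    # one correction alone stays weak
assert not rule.graduated
rule.reinforce()
rule.reinforce()
assert rule.graduated               # repeated signals make it stick
```

The same counter explains rollback: deleting a rule is just dropping it from the store, no retraining involved.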

Does it work with my LLM provider?

Yes. Gradata works with any LLM: OpenAI, Anthropic, LangChain, CrewAI, local models. It sits between you and the model at the system prompt layer.
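Sitting "at the system prompt layer" means learned rules are prepended to whatever system prompt you already send, so the approach is provider-agnostic. A rough sketch of the idea (the function and formatting are assumptions, not Gradata's actual implementation):

```python
# Hypothetical sketch: inject learned rules at the system-prompt layer,
# so any provider (OpenAI, Anthropic, local) sees them the same way.
def build_system_prompt(base_prompt: str, rules: list[str]) -> str:
    if not rules:
        return base_prompt
    rule_block = "\n".join(f"- {r}" for r in rules)
    return f"{base_prompt}\n\nLearned rules:\n{rule_block}"

prompt = build_system_prompt(
    "You are a helpful assistant.",
    ["Use sentence case in headings", "Prefer active voice"],
)
assert prompt.startswith("You are a helpful assistant.")
assert "- Prefer active voice" in prompt
```

Because the model only ever sees text, nothing about this depends on a particular SDK or API shape.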

Does my data leave my machine?

No. Gradata runs entirely on your machine by default. Zero network calls. Your corrections never leave your computer. Open source (AGPL-3.0).

How is this different from Custom Instructions?

Custom Instructions are static files you write and maintain manually. Gradata captures corrections dynamically and graduates them into rules that compound over time.

How many tokens does it add to my prompts?

About 250 tokens at session start (your 10 most relevant rules). Additional rules swap in based on context. Roughly $1/month in added token cost.
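Selecting "the 10 most relevant rules" can be sketched as a simple ranking over the session context. This toy version uses word overlap as the relevance score purely for illustration; the real selection mechanism is not documented here:

```python
# Hypothetical sketch: pick the k most relevant rules for this session's
# context, keeping the injected block within a small token budget.
def select_rules(rules: list[str], context: str, k: int = 10) -> list[str]:
    # crude relevance proxy: shared words between rule and context
    ctx_words = set(context.lower().split())
    return sorted(
        rules,
        key=lambda r: len(ctx_words & set(r.lower().split())),
        reverse=True,
    )[:k]

rules = [
    "Use sentence case in headings",
    "Cite sources in sales emails",
    "Prefer bullet points in specs",
]
top = select_rules(rules, context="drafting a spec with bullet points", k=2)
assert top[0] == "Prefer bullet points in specs"
```

Capping the selection at a fixed k is what keeps the per-session overhead near a constant ~250 tokens regardless of how many rules you have accumulated.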

Can I export my rules?

Your rules are yours. Export as JSON or YAML anytime with brain.export(). No lock-in.
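Since rules are plain data, a JSON export round-trips with the standard library alone. The schema below is an assumption for illustration; brain.export() is the documented call, everything else here is hypothetical:

```python
import json

# Hypothetical rule schema; only the JSON round-trip is the point.
rules = [
    {"text": "Use sentence case in headings", "signals": 4},
    {"text": "Prefer active voice", "signals": 3},
]
exported = json.dumps(rules, indent=2)
restored = json.loads(exported)
assert restored == rules  # round-trips cleanly; no lock-in
```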

Can my team share rules?

Yes. brain.share() and brain.absorb() let team members share graduated rules. A Cloud Dashboard with team analytics is coming soon.

How much does it cost?

The SDK is free and open source (AGPL-3.0). A cloud dashboard with team features, meta-rules, and weekly digests is coming soon.

How is this different from fine-tuning?

Fine-tuning requires datasets, GPU time, and retraining. Gradata learns from individual corrections in real time. Think scalpel vs. cannon.