If you do a WHOIS on gramatr.com, you’ll find a domain registered in 2007. That’s not a typo. The domain that now hosts a context engineering platform has been continuously owned for nineteen years.
It started as a digital agency. It became something very different. But the thread connecting every iteration — every pivot, every patent, every product — is the same problem: data lives in silos, context gets lost between systems, and people spend too much time re-explaining things that should already be understood.
The tools changed. The problem never did.
2007: Gra Matr, the Agency
The original Gra Matr was a digital agency. Research, content strategy, competitive analysis, campaign deployment, creative direction — a full-service team building brand engagement and digital media strategy.
The work was good. The clients were real. And every project had the same invisible tax: the constant re-establishing of context. New team members needed onboarding. Clients had to re-explain their brand voice. Campaign history lived in email threads and slide decks that nobody could find when they needed them.
At the time, I thought that was just how work worked. It took years to recognize it as a solvable engineering problem.
The Thread Through Four Companies
After Gra Matr, I co-founded Recursive Labs. We built co-browsing technology — real-time shared browser sessions for customer support and collaboration. That work produced seven patents. The core problem we were solving: when two people look at the same screen, they still lack shared context about what they’re trying to accomplish. The technology was about bridging the gap between “I can see your screen” and “I understand your situation.”
Context, again. Different domain, same problem.
Then came Advocado, a cross-media attribution platform. If you’ve worked in advertising, you know the challenge: TV ads run, digital campaigns run, social campaigns run, and nobody can connect the dots. Multi-touch attribution tries to stitch together data from completely separate systems to answer “what actually worked?”
The fundamental obstacle wasn’t math. Multi-Touch Attribution isn’t broken because the math is wrong — it’s broken because the assumptions are. Each advertising channel collects data in its own format, its own cadence, its own silo. The attribution problem is a context fragmentation problem. Data exists everywhere. Understanding exists nowhere.
Context, again.
Then NEXT90 — a data engine for media companies. Same pattern at industry scale: publishers sitting on massive first-party data assets they couldn’t activate because the data lived in disconnected systems with no shared context layer. The NEXT90 insights and data engine was built to unify fragmented data into actionable intelligence.
The entire NEXT90 platform was built using what would eventually become gramatr’s intelligence layer. Not in theory — in practice. The same codebase, the same memory infrastructure, the same pattern of needing AI tools to maintain context across complex, multi-month projects.
November 2022: The Catalyst
When ChatGPT launched in November 2022, I dove in during the first week. Not casually. Deeply. The potential was immediately obvious. So was the limitation.
AI is powerful. AI is also fundamentally broken in one specific, maddening way: it forgets everything between sessions. Every conversation starts from zero. Every interaction requires re-establishing context that should already be understood.
Sound familiar? It was the same problem I’d been circling for fifteen years — the same problem that drove co-browsing patents, attribution platforms, and data engines. Just in a new domain.
2023: Building What Didn’t Exist Yet
In 2023, before turnkey RAG systems and vector databases were widely available, I was hand-building vector memory systems from scratch. Learning embeddings. Understanding similarity search. Building the infrastructure that companies like Mem0 would later productize.
This wasn’t academic interest. I was building the NEXT90 platform with AI agents, and every session started from zero. The agents forgot the architecture. They forgot the decisions. They forgot the preferences. I spent more time re-explaining my own codebase than building new features.
So I built memory. A knowledge graph with vector search. MCP tools for Claude Code integration. An early routing concept using an offloaded model to interpret intent. It worked. The AI stopped forgetting.
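The core of a hand-built vector memory is small: embed each note, store it, and retrieve by similarity at session start. Here is a minimal sketch of that idea — the `embed` function is a toy hashed bag-of-words stand-in for a real embedding model, and all names are illustrative, not gramatr's actual code:

```python
import math
from collections import Counter

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy stand-in for a real embedding model: hashed bag-of-words,
    L2-normalized so dot product equals cosine similarity."""
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class VectorMemory:
    """Minimal cross-session memory: store notes, recall by similarity."""

    def __init__(self) -> None:
        self.entries: list[tuple[str, list[float]]] = []

    def remember(self, note: str) -> None:
        self.entries.append((note, embed(note)))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        scored = sorted(
            self.entries,
            # unit vectors, so the dot product is the cosine similarity
            key=lambda e: -sum(a * b for a, b in zip(q, e[1])),
        )
        return [note for note, _ in scored[:k]]

memory = VectorMemory()
memory.remember("The API layer uses FastAPI with async handlers")
memory.remember("User prefers tabs over spaces in Python files")
memory.remember("Attribution jobs run nightly at 02:00 UTC")
print(memory.recall("what framework does the API use?", k=1))
```

A production version swaps the toy embedding for a model call and the linear scan for an index, but the retrieval loop — embed, compare, return top-k — is the same shape.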
But it didn’t start learning.
2024-Early 2026: The 40,000-Token Ceiling
The memory system was good. The AI could retrieve context from previous sessions. It could reference past decisions. It could maintain project state.
But every rule, every preference, every pattern still needed to live in a massive CLAUDE.md file — the system prompt that Claude Code reads at session start. That file grew to 40,000 tokens. Every time the AI made a mistake, I added a rule. Every time it ignored a convention, I added a section.
The file grew. The performance didn’t.
Language models degrade above roughly 32,000 tokens of context. My 40,000-token system prompt was actively making the AI worse. I’d built the best filing cabinet I could — and the problem was never the filing cabinet.
March 2026: Memory Becomes Intelligence
The inflection point came in early March 2026. I discovered Daniel Miessler’s PAI (Personal AI Infrastructure) and reconnected with his Fabric project. The routing patterns in those tools showed me how to turn the memory system into something fundamentally different.
The key insight: don’t store everything and retrieve what seems relevant. Classify first. Understand the request before you try to answer it. Route context dynamically based on what’s actually needed, not what might be useful.
In one week — March 21-28 — I built the routing engine. CLAUDE.md collapsed from 40,000 tokens to 1,200. The intelligence pipeline pre-classifies every request, assembles targeted context, and delivers only what matters. Development velocity jumped 7x, verifiable in the git log.
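The classify-then-route pattern can be sketched in a few lines. This is a hypothetical illustration, not gramatr's actual pipeline: the keyword classifier stands in for a small routing model, and the context slices and category names are invented for the example:

```python
# Named context slices, each a fraction of what a monolithic prompt would carry.
CONTEXT_SLICES = {
    "architecture": "Service boundaries, module map, ADR summaries.",
    "style": "Formatting rules, naming conventions, lint config.",
    "deploy": "CI pipeline, environments, release checklist.",
}

# Stand-in for a small classifier model that interprets intent.
KEYWORDS = {
    "architecture": {"design", "module", "service", "refactor"},
    "style": {"format", "naming", "lint", "convention"},
    "deploy": {"release", "pipeline", "deploy", "ci"},
}

def classify(request: str) -> list[str]:
    """Understand what the request is about before assembling any context."""
    words = set(request.lower().split())
    matched = [cat for cat, keys in KEYWORDS.items() if words & keys]
    return matched or ["architecture"]  # fall back to a default slice

def assemble_context(request: str) -> str:
    """Deliver only the slices this request needs, not everything that
    might be useful."""
    return "\n".join(CONTEXT_SLICES[c] for c in classify(request))

print(assemble_context("fix the lint convention for imports"))
```

The point of the pattern is the ordering: classification runs first and cheaply, so the expensive context only gets assembled for the categories the request actually touches.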
The memory system became an intelligence pipeline. The filing cabinet became a brain.
The Arc
Here’s what I find genuinely interesting about this trajectory: the capabilities of the original 2007 digital agency — research, content strategy, competitive analysis, creative direction, project management — are now skills inside the gramatr system. Not metaphorically. Literally. The dynamic skill registry includes skills for research, for website content creation, for competitive analysis.
The agency didn’t die. It evolved.
But gramatr is much more than what the agency was. A digital agency operates in one domain. gramatr is domain-agnostic. It learns from software architecture and DevOps and legal research and security design and content strategy — whatever the user works on. The agency origin is where it came from, not what it is.
Thirty-two years in technology. Seven patents. Four companies. One persistent problem: context gets lost between systems, and people pay the tax.
gramatr is my answer. Not a memory tool — a context engineering platform. Not a product I decided to build — a product that nineteen years of hitting the same wall made inevitable.
Same domain. Different intelligence.