The individual problem is solved

Andrej Karpathy recently published his LLM Knowledge Base architecture, and it's worth paying attention to. Not because of any single technique, but because of the framing.

He describes a three-layer system. Raw sources at the bottom: articles, papers, notes, documents. A compiled wiki in the middle: structured Markdown pages that the LLM generates, cross-references, and maintains. A schema layer on top: instructions that tell the LLM how to organize, update, and maintain everything.

The analogy is a compiler. Human knowledge is the source code. The LLM compiles it into a structured, queryable, interlinked knowledge base. The human curates and directs. The LLM does the bookkeeping.

What makes this significant is that it's stateful. Unlike RAG, which rediscovers information from raw chunks every time you ask a question, this architecture builds a persistent artifact. Knowledge is compiled once, maintained continuously, and compounds over time. The LLM performs maintenance passes to find contradictions, missing links, and stale claims. The knowledge base gets better the longer you use it.

For individuals, this is a solved problem. If you're a researcher, a writer, or an engineer who works with a large body of knowledge, this architecture works. Build a personal knowledge base. Let the LLM maintain it. Your tools get smarter every day.

But here's what Karpathy's architecture doesn't address: what happens when the knowledge isn't personal?

The organizational gap

Every organization I talk to is in roughly the same place. Individuals are using AI tools. Some are dramatically more productive. A few are doing genuinely creative things with coding assistants, writing tools, analysis workflows. The tools work.

But the organization isn't learning from any of it.

Each person builds their own context. Their own prompts. Their own workflows. A product manager develops a spec process that works beautifully for them. An engineer creates coding standards that make their AI assistant produce better code. A designer builds review templates that actually catch the right issues. All of this is valuable. None of it is shared.

I wrote recently about how AI context is becoming organizational IP. The teams getting real value from AI aren't the ones with the best models. They're the ones building structured, shareable context that encodes how their organization actually works. That argument stands. But it raises the obvious follow-up question: how do you actually get there?

That's the problem I've been working on. Not in theory, but in practice. Building the systems, the structures, and the habits that move an organization from "individuals use AI" to "AI is part of how we operate." What follows is the framework I've developed through that work.

Where most organizations actually are

Before talking about where to go, it helps to be honest about where you are. I use a maturity model adapted from automation thinking in other industries. It's not original, but it's useful because it gives teams a common language for a conversation that usually lacks one.

Level 0: Manual. No AI in workflows. The organization may have discussed it, but nothing is in use. This is increasingly rare in 2026, but some teams are still here, usually due to compliance concerns or leadership hesitancy.

Level 1: Assisted. Individuals are using AI tools on their own. Cursor, Claude, Copilot, ChatGPT. Usage is ad hoc. Each person figures out what works for them. There's no organizational structure around it. This is where most companies are today.

Level 2: Augmented. Teams start sharing context. Shared skills, coding standards, prompt templates, process documents that multiple people use. The AI tools start behaving consistently within a team because the context is shared, not personal. This is where the transition from individual productivity to team capability begins.

Level 3: Orchestrated. The organization has a structured knowledge architecture. Context is governed: owned, versioned, reviewed, maintained. There are clear layers: long-term organizational context, medium-term team context, immediate project context. AI tools draw from this structure and produce consistent, high-quality results across the organization. New hires get the benefit of institutional knowledge from day one.

Level 4: Autonomous. AI-driven workflows operate with human oversight rather than human initiation. The organization defines the outcomes it wants, and AI systems execute the work, escalating when they hit boundaries. Humans review, approve, and refine rather than doing the routine work themselves.

Level 5: Self-optimizing. The system learns from its own performance. Processes improve automatically based on outcomes. The knowledge architecture evolves based on what works and what doesn't. This is the theoretical endpoint, and very few organizations are here for anything beyond narrow operational workflows.

"Most organizations plateau at Level 1. The tools are in use, but the organization isn't learning from itself."

The critical observation is that most organizations plateau at Level 1. They buy the tools. People use them. Some people get great results. But the gap between Level 1 and Level 2 is where organizational AI adoption actually happens, and most teams never cross it intentionally. They wait for it to happen organically, and it doesn't.

The framework below focuses on the path from Level 1 to Level 3. That's where the real transformation happens, and it's where pragmatic effort has the highest return.

The migration framework

The most important thing I've learned about AI adoption is that it doesn't work as a sequential rollout. You can't finish "process encoding" before starting "knowledge architecture" before starting "governance." These are parallel tracks that reinforce each other. You advance on all of them simultaneously, and the progress compounds.

Track 1: Process encoding

This is where most teams should start, because it delivers immediate value and teaches the organization how to think about AI context.

Process encoding means capturing how your organization actually works in a form that AI tools can traverse. Not how the wiki says it works. Not how onboarding says it works. How it actually works, right now, when your best people are doing their best work.

Concretely, this means creating skills, rules, and context documents. A product spec skill that encodes your actual spec process: the questions that get asked, the criteria that matter, the tradeoffs that are acceptable. An engineering standards document that captures how your team makes architecture decisions. A review process that encodes what "good" looks like for your organization.

The key insight is that this isn't documentation. Documentation describes a process for humans to read and interpret. Process encoding creates a structure that an agent can follow. The difference matters. Documentation says "consider performance implications." An encoded process says "evaluate the query plan for any new database access pattern, flag anything that requires a full table scan, and check whether an index exists for the access pattern."
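To make that difference concrete, here is a minimal sketch of the query-plan rule above expressed as something an agent can actually execute rather than interpret. The plan structure, field names, and messages are invented for illustration, not taken from any real database tool:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlanNode:
    """One step of a simplified query plan, e.g. distilled from EXPLAIN output."""
    table: str
    access_type: str            # "index_scan" or "full_table_scan"
    index_used: Optional[str]   # name of the index, or None

def review_access_pattern(plan: list) -> list:
    """Encoded review rule: flag any access pattern that requires a
    full table scan, and check that an index exists for the pattern."""
    findings = []
    for node in plan:
        if node.access_type == "full_table_scan":
            findings.append(f"{node.table}: full table scan, needs review")
        elif node.index_used is None:
            findings.append(f"{node.table}: no index for this access pattern")
    return findings

# Example: one indexed lookup, one full scan that gets flagged.
plan = [
    PlanNode("users", "index_scan", "users_email_idx"),
    PlanNode("orders", "full_table_scan", None),
]
print(review_access_pattern(plan))
```

The point is not this particular check; it's that "consider performance implications" became a deterministic rule an agent can run on every change.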

Start with one process. Pick the one where inconsistency costs you the most. Encode it. Share it with the team. Iterate. Then pick the next one.

Track 2: Knowledge architecture

Process encoding tells agents how to do things. Knowledge architecture tells them what things exist and how they relate to each other.

This is the organizational version of Karpathy's compiler. Raw organizational knowledge (product decisions, architecture choices, customer insights, competitive positioning, roadmap context) gets compiled into a structured, navigable knowledge base. Not a flat wiki. A graph. Topics connect to other topics. Decisions connect to the evidence that supported them. Features connect to the customer problems they solve.

The structure matters more than the volume. A small, well-structured knowledge base with clear relationships between twenty core topics will outperform a massive unstructured dump of every document the company has ever produced. Agents need to traverse relationships. They need to know that a product positioning decision depends on three pieces of competitive evidence and affects four downstream feature specs. That traversal requires structure.

In practice, this means defining your organizational ontology. What are the core topics? How do they relate? What's authoritative? When a decision changes at the top, what downstream documents need to be updated? This sounds heavy, but it doesn't need to start that way. Start with the topics your team references most often. Map the relationships between them. Build from there.
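The traversal this enables can be sketched in a few lines. This is a toy ontology with invented topic names, not a real product graph; the only point is that "when a decision changes, what needs updating?" becomes a graph walk:

```python
from collections import deque

# Toy ontology: each topic maps to the documents that depend on it.
# All names here are hypothetical, for illustration only.
depends_on_me = {
    "positioning-decision": ["feature-spec-a", "feature-spec-b", "pricing-page"],
    "feature-spec-a": ["onboarding-flow"],
    "feature-spec-b": [],
    "pricing-page": [],
    "onboarding-flow": [],
}

def downstream(topic: str) -> set:
    """Breadth-first walk: every document that needs a review
    when `topic` changes. This is the traversal agents rely on."""
    seen, queue = set(), deque([topic])
    while queue:
        for dep in depends_on_me.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(downstream("positioning-decision")))
```

Note that a flat wiki cannot answer this question at all; the relationships are exactly what the "massive unstructured dump" lacks.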

Track 3: Governance

This is where most bottom-up AI adoption efforts fail. Without governance, context degrades. Skills go stale. Process documents drift from reality. The knowledge base becomes a liability instead of an asset.

Governance for AI context follows the same principles as data governance, because that's what it is. Context documents are data assets. They need ownership, versioning, review cadence, and quality standards.

The layered model I described in the context-as-IP article applies directly here. Long-term context (organizational mission, values, compliance requirements) is owned by leadership and changes slowly. Medium-term context (team processes, architecture standards, design principles) is owned by team leads and reviewed quarterly. Immediate context (project briefs, sprint goals, current customer issues) is managed by individuals and changes constantly.

Each layer has different ownership, different update cadence, and different rules about who can modify it. Git gives you the versioning and review layer for free. Context documents live in repositories. Changes go through pull requests. History is preserved. When something breaks, you can see what changed.

The practical minimum is: every context document has an owner, every owner reviews their documents on a defined cadence, and stale documents are flagged automatically. That's enough to start. You can add sophistication as the system matures.
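That practical minimum is small enough to sketch. Assuming each context document carries metadata like an owner, a last-reviewed date, and a review cadence (field names and cadences here are invented), automatic staleness flagging is a one-function job:

```python
from datetime import date, timedelta

# Hypothetical metadata you might keep in each context document's
# front matter; paths, owners, and cadences are illustrative.
docs = [
    {"path": "context/spec-process.md", "owner": "pm-lead",
     "last_reviewed": date(2026, 1, 10), "cadence_days": 90},
    {"path": "context/coding-standards.md", "owner": "eng-lead",
     "last_reviewed": date(2025, 6, 1), "cadence_days": 90},
]

def stale_documents(docs: list, today: date) -> list:
    """Flag documents whose owner has not reviewed them within cadence."""
    return [d["path"] for d in docs
            if today - d["last_reviewed"] > timedelta(days=d["cadence_days"])]

print(stale_documents(docs, today=date(2026, 2, 1)))
```

Wire a check like this into CI on the context repository and the governance floor enforces itself.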

Track 4: Progressive automation

With encoded processes, structured knowledge, and governance in place, you can start automating workflows with confidence. Without those foundations, automation is just moving faster in an unknown direction.

Progressive automation means starting with the workflows that are highest-value and lowest-risk. Code review assistance before autonomous code generation. Spec drafting before autonomous product decisions. Report generation before autonomous analysis.

The pattern is consistent: start with AI augmenting human judgment, not replacing it. Let the human review, correct, and refine. Use those corrections to improve the context and the process encoding. Each correction makes the next automated run better. Over time, the human review shifts from "checking everything" to "reviewing exceptions." That's the natural path from Level 2 to Level 3, and eventually to Level 4.
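The augment-then-automate pattern can be modeled in miniature. This is a deliberately simplified sketch (the quality score and threshold are stand-ins for whatever review criteria your encoded process defines), but it captures the shape: drafts below the bar go to a human, and each correction is captured to improve the context for the next run:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewLoop:
    """Toy model of progressive automation: corrections are captured
    so the encoded process improves and fewer runs need review.
    The threshold and scoring are invented for illustration."""
    auto_threshold: float = 0.9
    corrections: list = field(default_factory=list)

    def handle(self, draft: str, quality: float) -> str:
        if quality >= self.auto_threshold:
            return "auto-approved"         # humans review exceptions only
        self.corrections.append(draft)     # feed back into process encoding
        return "human review required"

loop = ReviewLoop()
print(loop.handle("spec draft v1", quality=0.6))
print(loop.handle("spec draft v2", quality=0.95))
```

The shift from Level 2 to Level 3 is the shift from every draft landing in the first branch to most drafts landing in the second.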

Why this compounds

The reason this framework matters, and the reason it's worth the investment in structure, is compounding.

At Level 1, every AI interaction starts from zero. The person provides context, the AI responds, the context is lost. The next person starts over. The next project starts over. Every interaction is independent. There's no organizational memory.

At Level 3, every AI interaction builds on everything that came before. The agent draws on encoded processes, structured knowledge, and governed context. When someone improves a process, every subsequent interaction benefits. When someone adds a piece of knowledge, every team that touches that topic benefits. When someone corrects an error, the correction propagates.

"The difference between Level 1 and Level 3 isn't that the AI tools are better. It's that the organization is feeding them better. And that advantage compounds every single day."

This is why organizations that invest in AI adoption infrastructure early will be very difficult to catch. The value isn't in the tools. Everyone has access to the same models. The value is in the structured organizational knowledge that makes those tools perform at a level that competitors, who are still at Level 1, cannot match.

This is also why the "wait and see" approach to AI adoption is more expensive than it appears. You're not just delaying productivity gains. You're forgoing compounding. Every month you wait is a month of organizational learning that doesn't happen.

What to do this week

If this resonates, here's where to start. Not next quarter. This week.

Audit your current context. Find out what context your team members have already created for their AI tools. Skills, rules, prompt templates, process documents. You'll probably be surprised by how much exists. You'll also see how inconsistent it is.

Pick one process. Choose the one where inconsistency costs you the most. The spec process that produces wildly different outputs depending on who writes the spec. The code review that catches different things depending on who's reviewing. The decision framework that lives in one person's head. Pick one.

Encode it. Work with the person who does it best. Capture not just the steps, but the judgment. What do they look for? What do they skip? What are the non-obvious criteria? Write it as a skill or a context document that an agent can follow.

Share it. Put it in a shared repository. Have the team use it. Get feedback. Iterate. The first version won't be perfect. The third version will be good. The tenth version will be something your organization depends on.

Assign an owner. Someone is responsible for keeping it current. Someone reviews it on a defined cadence. Someone decides when it needs to change. Without this, the document decays within weeks.

That's it. One process, encoded, shared, and owned. Do that, and you've started the migration from Level 1 to Level 2. Do it again next week. And the week after. Within a quarter, you'll have a fundamentally different relationship with your AI tools.

The path from personal AI to organizational AI isn't a platform initiative or a tool purchase. It's a discipline. And the organizations that develop it now will have an advantage that's very difficult to replicate later.

Ready to move beyond individual AI adoption?

If your team is stuck between "some people use AI" and "AI is part of how we operate," let's talk about what a pragmatic migration path looks like for your organization.

Schedule a Conversation