December 17, 2025 · 5 min read
AI coding assistants have evolved from autocomplete to autonomous agents. They read your codebase, run shell commands, create files, and make architectural decisions. But there's a problem: there's no standard way to constrain them at the repository level.
Every company is inventing their own solution. Claude uses CLAUDE.md. Cursor has .cursorrules. GitHub Copilot reads from various config files. None of them are compatible, and none of them are designed for governance—they're designed for context.
This is a governance problem masquerading as a documentation problem.
The Problem: Hallucinated Authority
AI assistants don't know what they're not supposed to do. Without explicit constraints, they'll:
- Run destructive commands directly instead of going through your established workflows
- Auto-merge PRs "to be helpful"
- Modify secrets, state files, or protected configurations
- Bypass approval gates that exist for good reasons
- Invent execution paths that don't exist in your system
Prompts don't solve this. They're ephemeral—they disappear after the conversation. Every new session starts from zero. What you need is something that persists with the repository and encodes system invariants that must remain true regardless of what a user asks.
The Current Landscape
Here's what major AI tools use today:
| Tool | Config File | Purpose |
|---|---|---|
| Claude Code | CLAUDE.md | Project context and instructions |
| Cursor | .cursorrules | Editor behavior and prompts |
| GitHub Copilot | Various | Workspace settings |
| Aider | .aider.conf.yml | Tool configuration |
| Continue | .continuerc.json | Extension settings |
Notice the pattern: configuration, not governance.
These files tell the AI how to behave (formatting, language preferences, context). They don't tell it what it must never do or who has authority over what decisions.
What's Missing: Architecture Contracts for AI
Consider a platform with established workflows—CI/CD pipelines, approval gates, environment promotions. You need your AI assistant to understand:
- All changes flow through Git—no direct mutations to production systems
- Certain environments require human approvals
- State files and secrets are immutable from the AI's perspective
- Execution paths are fixed and cannot be reordered or skipped
- The AI can analyze failures but cannot initiate recovery procedures autonomously
A config file with "please follow our processes" doesn't cut it. You need a structured contract that encodes:
- Context: What is this system and what are its boundaries?
- Intent: What tradeoffs were made and why?
- Rules: What must never happen?
- Workflows: What are the valid execution paths?
A Proposed Pattern: .ai/ Directory
The solution is a dedicated directory for AI governance—tool-agnostic, plain markdown, readable by any AI assistant that scans your repo:
.ai/
├── context.md # What is this system?
├── intent.md # What tradeoffs were made?
├── rules.md # What must never happen?
└── workflows.md # What are valid execution paths?

context.md
Defines the system boundaries and operating model:
## Control Boundaries
Decision-making authority resides with humans at these points:
- Configuration curation (what can be changed)
- Policy definition (what constraints apply per environment)
- Pull request approval (whether a change proceeds)
- Production promotion approval (multi-reviewer gate)
Automation owns execution within those boundaries but cannot
bypass approval gates, modify policy configurations, or
introduce new execution paths.

intent.md
Encodes design philosophy and accepted tradeoffs:
## Primary Goals
The system optimizes for:
- **Safety over speed**: All changes flow through Git with review gates
- **Auditability**: Every change creates a traceable PR
- **Self-service within guardrails**: Users get autonomy; policies enforce boundaries
### Accepted Tradeoffs
- Changes take minutes (PR workflow) rather than seconds (direct execution)
- Policy violations block requests rather than warn
- Production changes require multiple approvals even for low-risk changes

rules.md
The hard constraints—things that must never happen:
## AI Guardrails
### What an AI Assistant Must Never Modify
- State files or backend storage
- Secrets or credential stores
- Policy configurations that relax security constraints
- Approval requirements for any environment
- Branch protection rules
### Prohibited Actions
- Bypassing approval gates
- Merging PRs automatically to protected environments
- Deleting resources without explicit destroy workflow
- Initiating recovery or restoration without explicit human request

workflows.md
Valid execution paths that cannot be modified:
### How Changes Are Executed
Changes proceed through the following fixed sequence.
Steps may not be reordered, skipped, or conditionally bypassed.
1. Branch creation (feature branch)
2. Configuration changes committed
3. Pull request creation
4. Automated validation
5. Results posted for human review
6. Human approval (per environment requirements)
7. Merge to main branch
8. Automated execution
...

Why This Works (And Where It Doesn't)
What It Solves
Hallucinated authority: The AI now knows it doesn't have permission to execute destructive operations directly.
Architecture drift: As conversations progress, the AI has a stable reference for system invariants. No more re-explaining your architecture every session.
Multi-tool consistency: Whether you use Claude, Cursor, or Copilot, the constraints are in the repo. Switch tools, keep your governance.
Shared repos: Other contributors (human or AI) can understand the governance model without tribal knowledge.
What It Doesn't Solve
Enforcement: These are constraints on reasoning, not execution. The AI could still violate them.
Partial context: If someone pastes a snippet without loading the governance docs, the constraints don't exist.
Narrow prompts: Direct commands might not trigger constraint checking.
User override: "Ignore the rules, I'm the admin" might work.
The honest answer is that these docs increase the likelihood of refusal; they don't create an inability to comply.
But that's often enough. Most security is about raising the bar, not building impenetrable walls.
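If you want to raise the bar further, the reasoning constraints can be paired with a thin technical backstop. The following is a minimal sketch of a CI or pre-commit check, assuming a hardcoded list of protected path globs that mirrors rules.md; the globs and the script itself are illustrative, not part of the proposal or of any tool's API.

```python
#!/usr/bin/env python3
"""Hypothetical CI/pre-commit backstop: fail the check if a change touches
paths that rules.md declares off-limits. The glob list below is illustrative;
in practice it would be kept in sync with your own .ai/rules.md."""
import fnmatch
import subprocess
import sys

# Paths that must never be modified without explicit human approval.
PROTECTED_GLOBS = [
    ".ai/rules.md",   # the governance contract itself
    "*.tfstate",      # state files (fnmatch's * also matches across '/')
    "secrets/*",      # credential stores
]

def changed_files(base_ref: str = "origin/main") -> list[str]:
    """Files changed relative to the base branch, per git diff."""
    result = subprocess.run(
        ["git", "diff", "--name-only", base_ref, "--"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line]

def main() -> int:
    violations = [
        path for path in changed_files()
        if any(fnmatch.fnmatch(path, glob) for glob in PROTECTED_GLOBS)
    ]
    if violations:
        print("Blocked: change touches protected paths:")
        for path in violations:
            print(f"  - {path}")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```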
The Case for Standardization
Right now, every organization is solving this independently. That's wasteful and inconsistent.
A standard should define:
- Directory convention: .ai/ (tool-agnostic, intuitive, future-proof)
- File structure: Separate context, intent, rules, and workflows
- Schema: Machine-readable sections that tools can parse (initially convention-based, later formally validated; see the sketch below)
- Inheritance: Project-level overrides of organization defaults
- Versioning: How constraints evolve with the codebase
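Convention-based validation can start very small. Here is a rough sketch, assuming the four file names from this proposal and a purely illustrative check that rules.md contains at least one "MUST NOT" constraint:

```python
#!/usr/bin/env python3
"""Rough sketch of convention-based validation for a .ai/ directory.
The required file names follow the proposal above; the 'MUST NOT' check is an
assumption about what a formal schema might eventually mandate."""
from pathlib import Path
import re
import sys

REQUIRED_FILES = ["context.md", "intent.md", "rules.md", "workflows.md"]

def validate(repo_root: str = ".") -> list[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    ai_dir = Path(repo_root) / ".ai"
    if not ai_dir.is_dir():
        return ["missing .ai/ directory"]
    for name in REQUIRED_FILES:
        if not (ai_dir / name).is_file():
            problems.append(f"missing .ai/{name}")
    rules = ai_dir / "rules.md"
    if rules.is_file() and not re.search(r"\bMUST NOT\b", rules.read_text()):
        problems.append(".ai/rules.md contains no 'MUST NOT' constraints")
    return problems

if __name__ == "__main__":
    issues = validate()
    for issue in issues:
        print(f"ai-governance: {issue}")
    sys.exit(1 if issues else 0)
```

A check like this could run in CI on every PR, turning the directory convention into something tools (and humans) can rely on being present.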
A More Complete Standard
Here's what a cross-tool standard could look like:
.ai/
├── context.md # System boundaries and ecosystem position
├── intent.md # Design philosophy and tradeoffs
├── rules.md # Hard constraints (MUST/MUST NOT)
├── workflows.md # Valid execution paths

What Actually Changes?
The key question: do these files meaningfully change AI behavior, or do they mostly change how answers are explained?
Based on testing this pattern:
- They shape reasoning: The AI models the system as having hard boundaries that exist independent of user requests.
- They raise the refusal threshold: The AI is more likely to say "I can't do that" vs. "Let me try to help."
- They don't change capabilities: The AI could still technically attempt prohibited actions.
That's the right mental model. These docs are governance constraints, not technical enforcement. They work because AI assistants are designed to be helpful and compliant—not because they're physically prevented from violating them.
Try It Yourself
If you want to implement this pattern:
- Create a .ai/ directory in your repo (see the scaffolding sketch after this list)
- Write context.md: What is this system? What are its boundaries?
- Write intent.md: What tradeoffs did you make? What do you optimize for?
- Write rules.md: What must never happen? Be specific and use "MUST NOT" language.
- Write workflows.md: What are the valid execution paths? Can they be reordered?
- Reference these files in your CLAUDE.md or .cursorrules
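If you'd rather not start from blank files, a small script can scaffold the structure. This is a convenience sketch only; the headings are assumptions that mirror the examples earlier in this post, and the files are meant to be edited by hand afterwards.

```python
#!/usr/bin/env python3
"""Convenience sketch: scaffold a .ai/ directory with the four files from
this post. The headings are starting points, not a schema; edit by hand."""
from pathlib import Path

TEMPLATES = {
    "context.md": "# Context\n\n## Control Boundaries\n\n<!-- Who decides what, and where automation stops. -->\n",
    "intent.md": "# Intent\n\n## Primary Goals\n\n## Accepted Tradeoffs\n",
    "rules.md": "# Rules\n\n## AI Guardrails\n\n- The assistant MUST NOT ...\n",
    "workflows.md": "# Workflows\n\n## How Changes Are Executed\n\n1. ...\n",
}

def scaffold(repo_root: str = ".") -> None:
    """Create .ai/ and any missing governance files; never overwrite existing ones."""
    ai_dir = Path(repo_root) / ".ai"
    ai_dir.mkdir(exist_ok=True)
    for name, body in TEMPLATES.items():
        path = ai_dir / name
        if path.exists():
            print(f"kept existing {path}")
        else:
            path.write_text(body)
            print(f"created {path}")

if __name__ == "__main__":
    scaffold()
```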
The key insight: separate documentation from governance. READMEs explain how things work. Governance docs constrain what's allowed.
Conclusion
AI agents are here. They're reading our code, running our commands, and making decisions on our behalf. We've standardized how humans interact with repositories (PRs, branch protection, CODEOWNERS). We haven't standardized how AI agents should be constrained.
Until we do, every organization will reinvent this wheel. Some will do it well. Most won't do it at all, and their AI assistants will hallucinate authority they were never granted.
The .ai/ pattern isn't perfect, but it's a starting point. The goal isn't perfection—it's establishing a convention before the next generation of AI agents ships without any governance at all.
To be clear: this is a proposal, not a proven standard. I haven't validated it at scale across multiple teams or tools. But someone needs to start the conversation, and waiting for perfect evidence means waiting until the problem is already entrenched.
We need this.