The Problem: AI Agents Don't Read Your Confluence
Every design team I know has documented their standards somewhere. Brand guidelines in a PDF. Design principles on a Confluence page. Component documentation in Storybook. Accessibility requirements in a shared Google Doc.
None of it is accessible to the AI agents that are increasingly writing your production code.
When a developer asks Claude Code, Cursor, or Codex to build a component, the model works from its training data and whatever's in the immediate codebase. It doesn't know your design principles have a specific position on information density. It doesn't know your team avoids tooltip patterns in favour of inline disclosure. It doesn't know that your accessibility requirements aren't aspirational — they're hard requirements that block deployment.
So it guesses. And its guesses are competent, generic, and wrong for your product.

AGENTS.md: Onboarding Docs for AI
The solution I landed on is an AGENTS.md file in the root of the codebase. It's a Markdown file — readable by both humans and AI agents — that contains the standards, patterns, and constraints that any contributor to the codebase needs to know. Including the non-human ones.
An effective AGENTS.md covers:
Design principles with real examples. Not "we value simplicity" — that's meaningless to a model. Instead: "We follow 'complex made simple.' This means: progressive disclosure over showing everything at once. One primary action per screen. No tooltips — use inline help text instead. When in doubt, remove the element."
Patterns we use and patterns we avoid. Positive and negative examples are the most powerful context you can give a model. "We use Braid's Stack and Columns for layout. We never use custom CSS grid for page-level layout. We use Dialog for confirmations, never browser confirm(). We avoid floating action buttons."
Accessibility rules that are not optional. Models treat accessibility as a nice-to-have unless you tell them otherwise. "All interactive elements must have visible focus indicators. Colour contrast minimum 4.5:1 for text, 3:1 for UI elements. Touch targets minimum 44×44px. These are deployment blockers, not suggestions."
The reasons behind key architectural choices. This is what separates good AGENTS.md files from checklists. When the model understands why a decision was made, it can apply the principle to novel situations. "We use server-side rendering for all job listing pages because search engine indexing directly affects candidate traffic. Client-side rendering is acceptable for authenticated dashboard views."
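Pulled together, these four areas might look like the following sketch of an AGENTS.md. The section headings and wording are illustrative, assembled from the examples above rather than taken from any real file:

```markdown
# AGENTS.md

## Design principles
We follow "complex made simple". In practice:
- Progressive disclosure over showing everything at once.
- One primary action per screen.
- No tooltips; use inline help text instead.
- When in doubt, remove the element.

## Patterns we use / patterns we avoid
Use:
- Braid `Stack` and `Columns` for layout.
- `Dialog` for confirmations.

Avoid:
- Custom CSS grid for page-level layout.
- Browser `confirm()`.
- Floating action buttons.

## Accessibility (deployment blockers, not suggestions)
- All interactive elements must have visible focus indicators.
- Colour contrast: minimum 4.5:1 for text, 3:1 for UI elements.
- Touch targets: minimum 44×44px.

## Why we made key choices
- Server-side rendering for all job listing pages: search engine
  indexing directly affects candidate traffic. Client-side rendering
  is acceptable for authenticated dashboard views.
```

Note the shape: short, declarative rules with positive and negative examples side by side, and a "why" section the model can generalise from.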
The Difference Was Immediate
After adding AGENTS.md to our codebase, the quality of AI-generated code shifted noticeably. Instead of correcting the same suggestions over and over — remove that tooltip, simplify that layout, use the Braid component — I was refining them. The baseline went up because the context was always there, not buried in a prompt I forgot to write.
The key insight: prompts are ephemeral; project files are persistent. Every time you write a careful prompt explaining your design standards, that context evaporates at the end of the session. An AGENTS.md file is always present. Every agent that touches the codebase inherits it automatically.
Claude Skills: The Same Idea, Applied to Design Review
I've extended this same principle beyond code generation into design review using Claude Skills.
A Claude Skill is a structured set of instructions and reference material that Claude loads when triggered. I built one called /design-review — a comprehensive design auditor that evaluates designs and codebases against SEEK's Design Principles, our Quality Framework, Braid design system compliance, and WCAG AA accessibility standards.
The skill encodes:
SEEK's Design Principles — "Maximise the Experience" (full viewport utilisation, layered information architecture, minimal chrome) and "Bring the Marketplace to Life" (real-time activity indicators, social proof, live counters).
The Quality Framework — five pillars that define design quality at SEEK: Complex made simple. Beautifully crafted. Purposefully innovative. Cohesive at every touchpoint. Embrace diversity.
Braid design system compliance — specific checks for component usage, spacing tokens, colour palette adherence, and justified custom patterns.
WCAG AA accessibility — contrast ratios, touch targets, focus indicators, semantic HTML, keyboard navigation, screen reader support, reduced motion.
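Anthropic's Skill format is a folder containing a SKILL.md file with YAML frontmatter that tells Claude when to load it. The sketch below shows one plausible way to lay out the four areas above; the file names, frontmatter wording, and instruction text are illustrative, not the actual skill:

```markdown
---
name: design-review
description: Audit designs and codebases against SEEK's Design
  Principles, the Quality Framework, Braid design system compliance,
  and WCAG AA. Use when asked to review a screenshot, component,
  or codebase.
---

# Design Review

Evaluate the input against four sets of criteria, in order:

1. **Design Principles**: "Maximise the Experience" and "Bring the
   Marketplace to Life" (see `references/design-principles.md`).
2. **Quality Framework**: the five pillars
   (see `references/quality-framework.md`).
3. **Braid compliance**: component usage, spacing tokens, colour
   palette, justified custom patterns.
4. **WCAG AA**: contrast ratios, touch targets, focus indicators,
   semantic HTML, keyboard navigation, screen readers, reduced motion.

Output a structured audit with three tiers: critical issues,
important fixes, enhancement opportunities. Map each finding to the
specific principle or criterion it violates.
```

The frontmatter description is what lets Claude trigger the skill on relevant requests; the body and reference files carry the actual evaluation criteria.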
When I run /design-review against a screenshot or a codebase, the output is a structured audit with prioritised recommendations mapped to specific principles. Critical issues, important fixes, and enhancement opportunities — each tied back to the framework that justifies it.
Same idea as AGENTS.md: set the standards once, get aligned feedback every time.
Why This Matters for Design Teams, Not Just Individual Designers
The value of machine-readable design standards compounds across a team.
Consistency across contributors. When five developers and three AI agents are all building components, the AGENTS.md file is the one source of truth they all share. A junior developer using Cursor gets the same design guardrails as a senior developer using Claude Code.
Faster design review cycles. The AI catches the mechanical issues — wrong component, accessibility violation, principle mismatch — before the design review. The human reviewer focuses on judgment calls: is this the right interaction pattern? Does this flow serve the user's mental model?
Explicit standards replace implicit knowledge. Every design team has unwritten rules. The patterns that "everyone just knows." AI agents surface these gaps immediately, because they can only follow what's documented. Writing an AGENTS.md file forces you to make the implicit explicit — which benefits human team members too.
Shared skills create shared language. When the whole team uses the same /design-review skill, the feedback uses consistent terminology. "This violates 'complex made simple'" becomes a shared reference point, not a subjective opinion.
How to Start
You don't need a sophisticated setup. Start with two things:
1. Write an AGENTS.md file for your codebase. Spend an hour with your design team documenting the ten most common mistakes AI agents make in it. Write the correction for each one. Include positive and negative examples. Put the file in your repo root. Commit it.
2. Create a Claude Skill for your design review process. Take your design principles, quality framework, and accessibility requirements. Structure them as evaluation criteria with clear pass/fail signals. Use the skill in your next design review and iterate on the output.
I've open-sourced both as a starting point. The /design-review skill is available at designreview.cc — install it via Claude Code with npx design-review, or download the skill for Claude.ai. The AGENTS.md template for Braid-based interfaces is available at designreview.cc/AGENTS.md.
Fork them. Adapt them to your design system, your principles, your constraints. The specific content matters less than the practice: encoding your team's design standards in a format that AI agents can read, apply, and be held to.
The Bigger Picture
We're at an inflection point. AI agents are writing a growing share of production code. Designers who respond by writing more detailed Figma annotations are solving yesterday's problem. The code is being generated from context, not from redlines.
The designers who will have the most impact in this environment are the ones who shape that context: the AGENTS.md files, the Skills, the machine-readable standards that determine what "good" means before the first line of code is generated.
This isn't about controlling AI. It's about giving it the context to understand how your team actually works.