Turning Design Principles into an AI-Powered Skill

The Challenge

As our design team increasingly adopted AI-powered coding tools like Claude Code, Cursor, and Codex for rapid prototyping, we faced a dual challenge: How do we teach AI assistants to build with our design system correctly? And how do we ensure the prototypes they create meet our design principles, quality standards, and accessibility requirements? Each team member was essentially working with a blank-slate AI assistant that had no knowledge of our organizational design philosophy, craft frameworks, or design system conventions.

The Vision

We envisioned a future where every designer on our team had an AI assistant that could both build and evaluate—understanding how to use our design system components properly while also reviewing work against our quality frameworks. We needed two complementary capabilities: context for creation and criteria for evaluation. The goal was to democratize both our building knowledge and our review standards, making institutional expertise accessible to everyone from junior designers to AI assistants.

The Two-Part Solution

We developed a two-part approach that addresses both building and reviewing:

Part 1: AGENTS.md — A comprehensive guide that teaches AI assistants HOW to BUILD with our design system (Braid). It provides the context, rules, and references needed for creation.

Part 2: /design-review Skill — A Claude-based tool that REVIEWS and EVALUATES work against our design principles, quality framework, accessibility standards, and Braid compliance.

Together, these tools create a complete AI-assisted workflow: build with AGENTS.md context, then validate with /design-review feedback.

Part 1: AGENTS.md — Building Context

AGENTS.md serves as the 'brain' for AI assistants when building with our design system. This living document is written in a format that's both human-readable and machine-parseable, providing:

  • Design System Architecture: How Braid is structured and organized
  • Component Guidelines: When to use which components and how to compose them properly
  • Code Patterns: Preferred patterns for common design implementations
  • Library References: Direct links to coded component artifacts
  • Usage Rules: Dos and don'ts for using design system components
  • Best Practices: Proven approaches for implementing designs
  • Anti-patterns: Common mistakes to avoid and why

When a designer starts a project with Claude Code, Cursor, or Codex, they reference AGENTS.md in their workspace. The AI assistant then has full awareness of how to properly use our design system components—like having a design systems expert available during every coding session.
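
To make this concrete, here is a minimal sketch of the kind of pattern AGENTS.md captures. The component names come from Braid's public library, but the JobSummary example, its props, and the guidance comments are illustrative assumptions rather than excerpts from the actual guide.

```tsx
// Illustrative only: JobSummary and its props are hypothetical.
import { Card, Stack, Heading, Text } from 'braid-design-system';

// Preferred pattern: compose layout with Braid primitives so spacing,
// typography, and theming stay consistent across markets.
export const JobSummary = ({ title, company }: { title: string; company: string }) => (
  <Card>
    <Stack space="medium">
      <Heading level="3">{title}</Heading>
      <Text tone="secondary">{company}</Text>
    </Stack>
  </Card>
);

// Anti-pattern the guide warns against: hand-rolled divs with ad-hoc CSS
// margins, which bypass Braid's spacing scale and accessibility defaults.
```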

Part 2: /design-review Skill — Evaluation & Guidance

The /design-review skill is a Claude-based tool distributed as a .zip file that evaluates prototypes and codebases through a comprehensive three-phase workflow:

Phase 1: Context Gathering
Identifies market (AU/NZ/ID/TH/HK/SG/PH/MY), language, platform (web/mobile/iOS/Android), and target audience to ensure appropriate evaluation criteria.
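
As a rough illustration, the context gathered in this phase could be captured in a shape like the one below. This is a hypothetical sketch using the markets and platforms listed above, not the skill's actual schema.

```ts
// Hypothetical shape of the Phase 1 context; field names are assumptions.
type Market = 'AU' | 'NZ' | 'ID' | 'TH' | 'HK' | 'SG' | 'PH' | 'MY';
type Platform = 'web' | 'mobile' | 'ios' | 'android';

interface ReviewContext {
  markets: Market[];   // which SEEK markets the design must serve
  language: string;    // e.g. 'en', 'id', 'th'
  platform: Platform;
  audience: string;    // e.g. 'candidates' or 'hirers'
}
```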

Phase 2: Comprehensive Audit
Evaluates work against four distinct frameworks:

  • Design Principles: "Maximise the Experience" (immersive interfaces) and "Bring the Marketplace to Life" (vibrant, activity-focused design)
  • Quality Framework: The five pillars of simplicity, craft, innovation, consistency, and inclusivity
  • Braid Compliance: Validates the 80/20 component usage rule (80% Braid components, 20% custom); see the sketch after this list
  • Accessibility: Validates against WCAG AA standards
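
For the 80/20 rule above, a mechanical check could look something like this. It is a hypothetical illustration of the idea, not how the skill actually measures compliance.

```ts
// Hypothetical 80/20 check: tally component usages by origin and compare the ratio.
interface ComponentUsage {
  name: string;
  source: 'braid' | 'custom'; // e.g. imported from 'braid-design-system' vs. local code
}

function braidComplianceRatio(usages: ComponentUsage[]): number {
  if (usages.length === 0) return 1;
  const braidCount = usages.filter((u) => u.source === 'braid').length;
  return braidCount / usages.length;
}

// Flag the prototype when fewer than 80% of component usages come from Braid.
const meetsRule = (usages: ComponentUsage[]) => braidComplianceRatio(usages) >= 0.8;
```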

Phase 3: Structured Reporting
Delivers prioritized recommendations categorized as:

  • 🔴 Critical — Must-fix issues affecting functionality or accessibility
  • 🟡 Important — Significant improvements for quality and consistency
  • 🟢 Opportunities — Enhancement suggestions for craft and innovation

Each finding includes specific file locations, recommended actions, and relevant guideline citations.
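
A single finding might be structured along the lines of the sketch below. The shape and the sample values are assumptions for illustration, not the report format the skill emits.

```ts
// Hypothetical structure of one /design-review finding; all values are illustrative.
type Severity = 'critical' | 'important' | 'opportunity'; // 🔴 / 🟡 / 🟢

interface Finding {
  severity: Severity;
  file: string;            // specific file location
  summary: string;         // what was observed
  recommendation: string;  // recommended action
  guideline: string;       // relevant guideline citation
}

const example: Finding = {
  severity: 'critical',
  file: 'src/components/JobCard.tsx', // hypothetical path
  summary: 'Body text fails WCAG AA contrast on the dark card variant.',
  recommendation: 'Use a Braid text tone with sufficient contrast instead of a custom colour.',
  guideline: 'WCAG 2.1 SC 1.4.3 (Contrast, Minimum)',
};
```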

How It Works in Practice

The workflow creates a powerful build-review cycle:

1. Build with AGENTS.md Context
The designer starts a project with Claude Code/Cursor, referencing AGENTS.md. The AI assistant knows how to properly use Braid components, following established patterns and best practices.

2. Review with /design-review Skill
Once the prototype is ready, the designer runs /design-review to evaluate the work. The skill audits against all four frameworks and delivers structured feedback.

3. Iterate Based on Feedback
The designer (with AI assistance) addresses findings, working through Critical issues first, then Important improvements, then exploring Opportunities.

4. Continuous Validation
Throughout development, designers can run /design-review as often as needed, getting instant feedback without waiting for human review cycles.

It's like having both a design systems expert (AGENTS.md) and a senior design reviewer (/design-review) available 24/7 for every team member.

Empowering the Team

The impact on our team has been transformative. Junior designers now have access to both building knowledge and evaluation criteria at the same level as senior team members. A designer in Kuala Lumpur can build a prototype in the afternoon using AGENTS.md context, then immediately validate it with /design-review—no waiting for next-morning reviews from Sydney or for feedback from distributed team members.

Designers spend less time second-guessing component usage or worrying about accessibility compliance. The AI handles the systematic checks, freeing designers to focus on creative problem-solving and strategic decisions. Code reviews have shifted from catching basic design system violations to discussing innovative approaches and user experience improvements.

Consistency at Scale

Before this two-part system, design quality varied based on individual experience and interpretation. A senior designer might catch accessibility issues that a junior designer missed. Team members in different offices might use Braid components differently.

Now, every designer—regardless of experience level or location—has access to the same building context and evaluation criteria. The /design-review skill applies identical standards whether you're a SEEK designer in Australia, Singapore, Indonesia, Thailand, Hong Kong, or Malaysia. This doesn't stifle creativity—it ensures that creative energy goes into solving user problems rather than reinventing basic patterns or fixing preventable issues.

Shaping Conversations with AI

What's particularly powerful is how these tools shape the conversation between designers and their AI assistants. AGENTS.md teaches the AI to ask the right questions: 'Should we use a Card or CardLink here?' or 'This needs to work across all SEEK markets—let's ensure proper internationalization.'

The /design-review skill acts as a conversation guide during evaluation, directing attention to what matters: 'Let's examine this interaction against our accessibility standards' or 'How does this component choice align with the Maximise the Experience principle?'

These aren't just automated checks—they're structured dialogues that elevate the designer's thinking and decision-making process.

Market Considerations Built In

One of /design-review's strengths is its awareness of SEEK's multi-market reality. When evaluating work, it considers which markets the design will serve (AU, NZ, ID, TH, HK, SG, PH, MY) and applies appropriate standards for language, cultural context, and platform expectations. This prevents the common mistake of designing for one market and discovering issues when expanding to others.
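
A small code-level example of why this matters: the same salary figure needs market-aware formatting. The mapping below is an illustrative sketch using the standard Intl API; the locale chosen for each market is an assumption.

```ts
// Illustrative market-to-locale/currency mapping; locale choices are assumptions.
const marketFormats: Record<string, { locale: string; currency: string }> = {
  AU: { locale: 'en-AU', currency: 'AUD' },
  NZ: { locale: 'en-NZ', currency: 'NZD' },
  ID: { locale: 'id-ID', currency: 'IDR' },
  TH: { locale: 'th-TH', currency: 'THB' },
  HK: { locale: 'en-HK', currency: 'HKD' },
  SG: { locale: 'en-SG', currency: 'SGD' },
  PH: { locale: 'en-PH', currency: 'PHP' },
  MY: { locale: 'ms-MY', currency: 'MYR' },
};

// Formats an amount with the market's own separators and currency symbol,
// e.g. formatSalary('AU', 95000) vs. formatSalary('ID', 95000000).
function formatSalary(market: string, amount: number): string {
  const { locale, currency } = marketFormats[market];
  return new Intl.NumberFormat(locale, { style: 'currency', currency }).format(amount);
}
```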

A Living System

Both tools evolve with our organization. When design principles change, we update the /design-review skill's evaluation criteria. When Braid adds new components, we update AGENTS.md with usage guidance. Every team member's AI assistant immediately has access to the latest standards—no training sessions, no documentation hunts, no knowledge gaps.

We've also discovered that writing guidelines for AI consumption forces us to be more precise about our standards. Ambiguous rules like 'use good judgment' become specific criteria like 'maintain minimum 4.5:1 contrast ratio for text.' This clarity benefits both AI assistants and human designers.
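
The contrast rule is a good example of a criterion precise enough to compute. The sketch below implements the standard WCAG relative-luminance and contrast-ratio formulas; it is an illustration of that precision, not code taken from the skill itself.

```ts
// WCAG 2.x relative luminance for an sRGB colour given as [r, g, b] in 0-255.
function relativeLuminance([r, g, b]: [number, number, number]): number {
  const linear = (channel: number) => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b);
}

// Contrast ratio = (lighter + 0.05) / (darker + 0.05), per WCAG 2.x.
function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const [lighter, darker] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// 'Maintain minimum 4.5:1 contrast ratio for text' becomes a checkable assertion:
const passesAA = contrastRatio([51, 51, 51], [255, 255, 255]) >= 4.5; // dark grey on white
```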

The Future of Design Teams

This two-part approach represents a fundamental shift in how design teams work. Design principles and system knowledge shouldn't live in static documentation that designers hunt down and interpret inconsistently. They should be active, accessible intelligence that AI assistants apply during building and evaluation.

By separating building context (AGENTS.md) from evaluation criteria (/design-review), we've created a complete AI-assisted workflow that mirrors how experienced designers actually work: build with system knowledge, then review against quality standards.

This is more than automation—it's augmentation. We're not replacing designers with AI; we're equipping every designer with AI-powered expertise that makes them more effective, more consistent, and more focused on what humans do best: creative problem-solving.

Key Takeaways

  • Separate building from reviewing: Create tools for both context (how to build) and evaluation (how to assess)
  • Make knowledge accessible: Give every team member AI-powered access to expert-level design system knowledge and review standards
  • Enable continuous validation: Let designers get instant feedback without waiting for human review cycles
  • Scale consistency: Apply uniform standards across teams, time zones, and experience levels
  • Shape conversations: Use AI tools to guide designers toward better questions and decisions
  • Iterate continuously: Keep both tools as living documents that evolve with your standards

The question isn't whether AI will change how design teams work—it's whether your team will proactively shape that change or reactively adapt to it. We chose to be proactive, and our two-part system (AGENTS.md + /design-review) is our blueprint for the future.
