Model Context Protocol (MCP): a product designer's guide
AI product design · 21 February 2026

Model Context Protocol (MCP) is an open standard that defines how AI applications and agents connect to external data sources and tools. Think of it as USB-C for AI context: a universal connector that lets LLM-powered applications reach out to your file system, databases, APIs, and services without bespoke integration work for each combination.

What is Model Context Protocol?

MCP is an open protocol developed by Anthropic that standardises how AI agents access external context. Before MCP, connecting an LLM to a tool (a database, a file, a calendar) required custom prompt engineering and fragile integration code for every pairing of application and tool.

MCP defines a standard interface (see the sketch after this list):

  • MCP servers — any tool or data source that exposes itself via the protocol
  • MCP clients — AI applications and agents that consume those servers
  • Resources — data the server exposes (files, records, documents)
  • Tools — actions the agent can invoke (query a database, send a message, read a file)
  • Prompts — reusable prompt templates the server provides
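
To make those five terms concrete, here is a minimal server sketch using FastMCP from the official Python SDK. The calendar resource, tool, and prompt are invented for illustration; a real server would wire them to actual data.

```python
# A minimal MCP server: one resource, one tool, one prompt.
# Uses FastMCP from the official Python SDK (pip install mcp).
# The "calendar" domain here is hypothetical, for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar-demo")

# Resource: data the server exposes for the client to read.
@mcp.resource("calendar://today")
def todays_events() -> str:
    """Return today's events as plain text (stubbed for the example)."""
    return "09:30 design review\n14:00 user interview"

# Tool: an action the agent can invoke.
@mcp.tool()
def create_event(title: str, start: str, duration_minutes: int) -> str:
    """Create a calendar event (stub: echoes what would be created)."""
    return f"Created '{title}' at {start} for {duration_minutes} minutes"

# Prompt: a reusable template the server offers to clients.
@mcp.prompt()
def summarise_week() -> str:
    return "Summarise my calendar for the coming week, flagging conflicts."

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport, so a local client can connect
```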

For designers: MCP is the infrastructure layer that determines what an AI agent can know and do. It is the boundary between what is possible and what is not.

Why MCP matters for product designers

Most designers think of AI integration as a UI problem: where does the chat box go? MCP reveals that it is fundamentally a context problem.

The quality of an AI product is determined by the quality and breadth of context the model can access. An AI assistant that can only see what you type to it is a calculator. One that can access your calendar, your documents, your communication history, and your codebase is an agent.

As a designer, understanding MCP changes how you think about:

  • Feature scope — what can the agent actually know at decision time?
  • Trust and permissions — which context sources does the user consent to sharing?
  • Failure modes — what happens when an MCP server is unavailable or returns bad data?
  • Progressive disclosure — how do you surface what context the agent is using without overwhelming the user?

Designers who leave these questions to engineers will find their AI products feel shallow and untrustworthy.

MCP in practice: how I use it

I use MCP in my own workflow as an AI-native builder. Connecting Claude to local MCP servers gives me agents that can read and write files, query databases, interact with APIs, and reason about my actual work — not just text I have pasted into a chat window.
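
Connecting Claude Desktop to a local server is a small configuration entry rather than an integration project. This is roughly the shape of a claude_desktop_config.json entry; the server name and path are placeholders:

```json
{
  "mcpServers": {
    "calendar": {
      "command": "python",
      "args": ["/path/to/calendar_server.py"]
    }
  }
}
```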

In product design work, this means:

  • Design decisions that account for what the AI can actually access
  • Prototype flows that show the agent using real context, not mock data
  • Clearer user permission models, because MCP makes context access explicit

The shift is from designing chat interfaces for LLMs to designing agentic systems that have structured access to the world.

Design patterns for MCP-powered products

Context disclosure — show users which MCP servers are connected and what data they expose. Not a settings screen buried three levels deep: a live, ambient signal ("Connected to: Calendar, Email, Files").
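
Because the protocol is introspectable, the client can ask each connected server what it exposes and drive that ambient signal from live data rather than hard-coded copy. A rough sketch using the Python SDK's client; the server script name is a placeholder:

```python
# Sketch: ask a connected MCP server what it exposes, so the UI can show
# a live "Connected to: ..." signal instead of burying it in settings.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder: launch a local server script over stdio.
server_params = StdioServerParameters(command="python", args=["calendar_server.py"])

async def describe_connection() -> str:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            resources = await session.list_resources()
            tool_names = ", ".join(t.name for t in tools.tools)
            resource_names = ", ".join(r.name for r in resources.resources)
            return f"Connected: tools [{tool_names}]; resources [{resource_names}]"

if __name__ == "__main__":
    print(asyncio.run(describe_connection()))
```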

Permission boundaries — design clear, granular consent flows for each context source. MCP makes it technically possible to scope permissions; good UX makes it legible.
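
A sketch of what "granular" can mean in code: consent tracked per server and per tool, checked before the agent acts. This structure is hypothetical, not something MCP prescribes.

```python
# Hypothetical per-source permission model: MCP makes each context source
# explicit, which lets the product ask for consent per server and per tool.
from dataclasses import dataclass, field

@dataclass
class ConsentPolicy:
    # Map of server name -> set of tool names the user has approved.
    approved: dict[str, set[str]] = field(default_factory=dict)

    def allows(self, server: str, tool: str) -> bool:
        return tool in self.approved.get(server, set())

policy = ConsentPolicy(approved={"calendar": {"create_event"}})

def call_tool_with_consent(server: str, tool: str) -> str:
    if not policy.allows(server, tool):
        # Surface a consent prompt instead of silently calling or silently failing.
        return f"Ask the user: allow '{tool}' on '{server}'?"
    return f"Calling {tool} on {server}"
```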

Graceful degradation — design every agentic feature to work partially if an MCP server is unavailable. The product should degrade gracefully, not fail entirely.
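
One way to sketch this: wrap every context fetch so a failure returns a stated fallback instead of breaking the feature. The helper below is hypothetical; the point is the fallback path, not the specific client call.

```python
# Hypothetical helper: degrade gracefully when a context source is unavailable.
from typing import Awaitable, Callable

async def with_fallback(fetch_fn: Callable[[], Awaitable[str]], fallback: str) -> str:
    try:
        return await fetch_fn()
    except Exception:
        # The agent keeps working with reduced context, and the UI should say so.
        return fallback

# usage (hypothetical): context = await with_fallback(
#     read_today_events,
#     fallback="Calendar unavailable; answering without schedule context.")
```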

Reasoning transparency — when the agent uses a specific context source to inform a response, surface it: "Based on your calendar for next week...". This is not a technical footnote; it is a trust signal.
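
A sketch of carrying that signal through the system: keep the sources alongside the answer rather than reconstructing them in the UI. The structure is hypothetical, not defined by MCP itself.

```python
# Hypothetical structure: carry source attribution alongside the agent's answer
# so the UI can render a trust signal like "Based on your calendar...".
from dataclasses import dataclass

@dataclass
class AttributedAnswer:
    text: str
    sources: list[str]  # MCP resources/tools that informed the answer

answer = AttributedAnswer(
    text="You have three free slots on Thursday afternoon.",
    sources=["calendar://next-week"],
)

print(f"{answer.text}  (Based on: {', '.join(answer.sources)})")
```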
