AI Prompts for UX/UI Designers: Better Design Faster (2026)
Discover 25 AI prompts for UX/UI designers organized by workflow. From user research to design systems, learn how to use ChatGPT, Claude 4, and Gemini 3 to design faster without losing quality.
Last week, I watched a designer friend spend 4 hours creating user personas for a new project. When I showed her how AI could generate the initial framework in 10 minutes, her first reaction was skepticism. “Won’t it just give me generic garbage?” Fair question. That’s what I thought, too—until I learned how to write proper design prompts.
Here’s the thing: AI won’t replace UX/UI designers, but designers who use AI will absolutely replace those who don’t. According to Nielsen Norman Group, designers who integrate AI into their workflow save 8-12 hours per week on average. That’s not trivial—that’s an extra project day every single week.
But generic “ChatGPT prompt lists” are useless for real design work. What you need are workflow-integrated prompts that fit into your actual design process—from user research through testing. In this guide, I’ll share 25 AI prompts I actually use, organized by design phase, with real examples and customization tips.
You’ll learn how to use GPT-5, Claude 4, and Gemini 3 to accelerate research, wireframing, design systems, accessibility checks, and testing—without losing the human creativity that makes great design great.
How UX/UI Designers Use AI in 2026
Let’s cut through the hype. AI isn’t magic, and it definitely doesn’t “do design for you.” What it does is handle the tedious, time-consuming parts of the design process so you can focus on creative problem-solving and strategic thinking.
In my design practice, I use AI for about 40% of my work—mostly research synthesis, documentation, and accessibility checks. The other 60%? That’s where human judgment, empathy, and creativity live. AI gives you the data and structure; you provide the soul.
The designers I know who’ve successfully integrated AI follow one rule: use AI to explore, humans to decide. AI generates five wireframe variations in minutes, but you’re the one who knows which direction aligns with brand strategy. AI can check WCAG compliance, but you understand the user context behind the guidelines.
Best AI Models for Design Work
Not all AI models are created equal for design work. Here’s what actually works:
| Use Case | Best Model | Why |
|---|---|---|
| Quick persona generation | Claude 4 Haiku, GPT-5-Mini | Fast, cost-effective for rapid iteration |
| Detailed design system docs | Claude 4 Opus, GPT-5 | Long context windows handle complex specifications |
| Design critique | Claude 4 Sonnet (vision), Gemini 3 Pro | Can analyze screenshots and provide feedback |
| Accessibility checks | GPT-5, Claude 4 | Strong understanding of WCAG standards |
| Privacy-sensitive work | Llama 4 (local) | Run on your own infrastructure |
I primarily use Claude 4 Sonnet for most tasks. It has a 200K context window (expandable to 1M), which means I can feed it entire user interview transcripts or design systems without hitting limits. For quick tasks, GPT-5-Mini is lightning fast. And when I need to analyze design screenshots, Claude 4’s vision capabilities are unmatched.
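If you'd rather script this than paste transcripts into a chat UI, here's a minimal sketch using the official TypeScript SDK. The file name and model ID are placeholders; swap in whatever model you're actually using:

```ts
import Anthropic from '@anthropic-ai/sdk';
import { readFileSync } from 'node:fs';

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function synthesizeInterviews(): Promise<void> {
  const transcript = readFileSync('interviews.txt', 'utf8'); // hypothetical file

  const message = await client.messages.create({
    model: 'claude-sonnet-latest', // placeholder: check your provider's current model list
    max_tokens: 2000,
    messages: [{
      role: 'user',
      content: `Act as a UX researcher. Synthesize these interviews into 2-3 personas:\n\n${transcript}`,
    }],
  });

  // The response is a list of content blocks; print the text ones.
  for (const block of message.content) {
    if (block.type === 'text') console.log(block.text);
  }
}

synthesizeInterviews();
```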
Want to learn the fundamentals before diving into design-specific prompts? Check out our guide on prompt engineering fundamentals.
AI Prompts for User Research & Persona Development
User research is where AI truly shines. Not because it replaces talking to real users—it doesn’t—but because it can process interview data, identify patterns, and generate frameworks faster than any human could.
I’ll be honest: I was skeptical at first. How could an AI understand user empathy? Turns out, it can’t. But it can synthesize 20 user interviews into common themes in minutes, which gives you more time to focus on the empathy part.
Prompt #1: Generate User Personas from Interview Data
When to use: After conducting user interviews or surveys, when you have raw data that needs synthesis.
Prompt:
Act as a UX researcher analyzing user interview data. I'll provide interview transcripts from [NUMBER] users about [PRODUCT/FEATURE].
Create 2-3 user personas that include:
- Demographics (age range, occupation, tech proficiency)
- Goals & motivations (what they're trying to achieve)
- Pain points & frustrations (current challenges)
- Behaviors & patterns (how they currently solve the problem)
- Quote (representative statement from interviews)
Base personas only on patterns found in the actual data—don't add assumptions. If demographic info wasn't consistently captured, note that as a limitation.
Interview data:
[PASTE YOUR INTERVIEW NOTES]
Example output I actually got: When I used this for a healthcare app, it identified a persona we completely missed: “The Reluctant Caregiver”—adult children managing parents’ health apps who were overwhelmed by medical jargon. That persona drove our entire information architecture redesign.
Pro tip: Always review personas against your actual research. AI sometimes spots patterns humans miss, but it also sometimes sees patterns that aren’t there. Trust but verify.
Prompt #2: Create Customer Interview Questions
When to use: Before user interviews, when you need to explore a specific feature or problem space.
Prompt:
Act as a UX researcher creating user interview questions for [PRODUCT/FEATURE].
Context:
- Target users: [USER TYPE]
- Research goal: [WHAT YOU'RE TRYING TO LEARN]
- Current hypothesis: [YOUR WORKING THEORY]
Create 12-15 open-ended interview questions organized into:
1. Opening (rapport building, context gathering)
2. Current behavior (how they solve the problem now)
3. Pain points (what frustrates them)
4. Ideal solution (what they wish existed)
5. Closing (anything we missed)
Avoid leading questions. Make questions neutral and exploratory.
Why this works: The structure keeps interviews focused while leaving room for unexpected insights. I’ve used this prompt for 15+ research projects, and it consistently generates better questions than I’d write from scratch.
Prompt #3: Map User Journeys
When to use: When you have user research and need to visualize the end-to-end experience.
Prompt:
Act as a UX designer creating a user journey map for [USER TYPE] trying to [ACCOMPLISH GOAL] using [PRODUCT/SERVICE].
Based on this research data:
[PASTE RESEARCH FINDINGS]
Create a user journey map with these stages:
- Awareness (how they discover the need/solution)
- Consideration (how they evaluate options)
- Decision (what triggers action)
- Use (how they accomplish their goal)
- Post-use (what happens after)
For each stage, include:
1. User actions (what they do)
2. Thoughts & feelings (emotional state)
3. Pain points (what frustrates them)
4. Opportunities (where we can improve)
Format as a markdown table for easy visualization.
Real example: Used this for an e-commerce checkout flow. The AI identified that our biggest drop-off wasn’t at payment (what we assumed) but at account creation. Users didn’t understand why they needed an account, which led us to add guest checkout.
Prompt #4: Analyze Competitor Design Patterns
When to use: During competitive analysis or when exploring design patterns for a new feature.
Prompt:
Act as a UX analyst reviewing competitor design patterns for [FEATURE/FLOW].
Analyze these competitors:
1. [COMPETITOR NAME + URL/SCREENSHOT]
2. [COMPETITOR NAME + URL/SCREENSHOT]
3. [COMPETITOR NAME + URL/SCREENSHOT]
For each, identify:
- UI pattern used (e.g., modal, inline, wizard)
- Information hierarchy (what's emphasized)
- User flow (steps required)
- Strengths (what works well)
- Weaknesses (what could improve)
- Unique approach (anything innovative)
Then suggest: What patterns are industry standard? What opportunities exist to differentiate?
Why I use this: Competitor analysis is tedious, but AI can systematically break down patterns I might miss while I’m focused on visual differences. It’s particularly good at identifying micro-interactions and information architecture subtleties.
Prompt #5: Extract Insights from User Feedback
When to use: When you have customer support tickets, app store reviews, or feedback surveys that need synthesis.
Prompt:
Act as a UX researcher analyzing user feedback to identify common themes and actionable insights.
Source: [WHERE THIS FEEDBACK CAME FROM]
Time period: [DATE RANGE]
User feedback:
[PASTE FEEDBACK - up to 50 items]
Provide:
1. Top 5 themes (categorize feedback into buckets)
2. Sentiment breakdown (% positive, negative, neutral per theme)
3. Critical pain points (highest-impact frustrations)
4. Quick wins (issues with simple solutions)
5. Feature requests (what users are asking for)
Include specific user quotes for each theme as evidence.
Real talk: I processed 200 app store reviews in one sitting using Claude 4’s 200K context window. It found a pattern of users confusing two similar buttons—something that would’ve taken me hours to spot manually.
Remember that Nielsen Norman Group figure of 8-12 hours saved per week? Most of those hours come from research synthesis like this. If you’re spending days manually categorizing feedback, you’re working too hard.
Best AI Prompts for Wireframes and Prototypes
Let me be clear: AI can’t design for you. It doesn’t know your brand, your users’ context, or your strategic goals. But it can help you explore design directions faster, which means more iteration time for the ideas that matter.
I use these prompts for initial explorations, client option presentations, and rapid prototyping. The key is treating AI output as a starting point, not an endpoint.
Prompt #6: Generate Wireframe Concepts
When to use: Early ideation phase, when you’re exploring different structural approaches.
Prompt:
Act as a UX designer creating low-fidelity wireframe concepts for [SCREEN/FEATURE].
Requirements:
- Primary goal: [USER'S MAIN TASK]
- Key information to display: [LIST DATA POINTS]
- Actions available: [LIST USER ACTIONS]
- Constraints: [E.G., MOBILE-FIRST, SINGLE PAGE]
Provide 3 distinct layout approaches:
1. [APPROACH NAME]: [BRIEF DESCRIPTION]
- Layout structure (header, main content, sidebar, etc.)
- Information hierarchy (what's most prominent)
- UI patterns used (cards, lists, grid, etc.)
- Rationale (why this approach works)
2. [APPROACH NAME]: [DESCRIPTION]
...
For each, describe the wireframe in detail as if explaining it to a developer. No visual generation—I need the conceptual structure.
Personal story: Last month, I used this for a dashboard redesign. AI suggested a “progressive disclosure” approach I hadn’t considered—show summary cards first, expand details on click. It was the direction we ultimately shipped.
What AI misses: Brand personality. The wireframes AI suggests are functionally sound but generic. You still need to inject your voice, visual rhythm, and emotional resonance.
If you're designing for mobile specifically, check out our [AI prompts for mobile development](/blog/mobile-development-ai-prompts) for React Native, SwiftUI, and Flutter-specific guidance.
Prompt #7: Create Layout Variations
When to use: When presenting options to stakeholders or running A/B tests.
Prompt:
Act as a UI designer creating layout variations for [COMPONENT/SCREEN].
Base design:
[DESCRIBE CURRENT DESIGN OR REQUIREMENTS]
Create 5 layout variations that explore:
1. Different visual hierarchies (what gets emphasis)
2. Different spacing/density (compact vs. spacious)
3. Different content organization (grid vs. list vs. cards)
4. Different interaction models (hover vs. click vs. always visible)
For each variation:
- Describe the visual structure
- Explain the hierarchy logic
- Identify the use case (when this version is best)
- Note trade-offs (what you gain/lose)
When this saved me: A client couldn’t decide between grid and list layouts for a product catalog. I generated 5 variations with AI in 10 minutes and presented them; a hybrid approach (grid on desktop, list on mobile) that I wouldn’t have thought to suggest won the client over immediately.
Prompt #8: Organize Component Hierarchy
When to use: When building or organizing a design system’s component structure.
Prompt:
Act as a design systems architect organizing a component library.
Current components:
[LIST YOUR COMPONENTS]
Create a hierarchical organization with:
1. Atomic level (base elements: buttons, inputs, icons)
2. Molecular level (simple combinations: search bars, nav items)
3. Organism level (complex sections: headers, cards, forms)
4. Template level (page layouts)
For each level:
- List components that belong there
- Explain the grouping logic
- Suggest naming conventions
- Identify any missing components for completeness
Real result: Used this when inheriting a messy design system with 87 “components” (many were duplicates with different names). AI helped me consolidate into 34 properly organized components with consistent naming.
Prompt #9: Design User Flows
When to use: When mapping multi-step processes or complex interactions.
Prompt:
Act as a UX designer creating a user flow for [USER GOAL].
Starting point: [WHERE USER BEGINS]
End goal: [WHAT SUCCESS LOOKS LIKE]
Map the optimal flow including:
- Decision points (where user makes a choice)
- Required inputs (what information is needed)
- Validation steps (what checks happen)
- Error states (what can go wrong and recovery paths)
- Alternative paths (different routes to the goal)
Format as a text-based flowchart using:
→ for flow direction
◇ for decision points
☐ for actions
✓ for success states
✗ for error states
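To make the output format concrete, here’s a hypothetical password-reset flow written in that notation:

```
☐ Enter email → ◇ Account exists?
  yes → ☐ Send reset link → ☐ Set new password → ✓ Password updated
  no  → ✗ "No account found" → ☐ Offer sign-up instead
```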
Honest admission: I’m still refining how to best use AI for flows. It’s great at identifying steps I might forget (like error recovery), but it sometimes over-complicates simple interactions. Human judgment is critical here.
For more design-specific prompts, you may want to explore our design prompt templates for additional options.
Design System Prompts: Tokens, Components, Documentation
This is where AI becomes ridiculously useful. Design system documentation is tedious, time-consuming, and crucial. Most designers I know avoid it until it becomes a crisis. AI changes that equation entirely.
I documented an entire legacy design system in 2 hours using these prompts. Previously, that task sat on my backlog for 3 months because I couldn’t justify the time investment.
Prompt #10: Generate Design Token Naming
When to use: When creating or refactoring design tokens for colors, typography, spacing, etc.
Prompt:
Act as a design systems engineer creating design token naming conventions.
Token types needed:
- Colors: [PRIMARY, SECONDARY, NEUTRAL, SEMANTIC]
- Typography: [HEADINGS, BODY, LABELS, ETC.]
- Spacing: [MARGINS, PADDING, GAPS]
- Shadows: [ELEVATION LEVELS]
- Borders: [RADIUS, WIDTH]
Create a token naming system that:
1. Uses semantic naming (not descriptive: "primary" not "blue")
2. Follows a consistent pattern (prefix-category-variant-state)
3. Scales logically (100-900 scale or t-shirt sizing)
4. Supports theming (light/dark mode)
Provide:
- Naming structure with examples
- Complete token list for [CHOOSE ONE TYPE TO START]
- JSON/CSS format example
Example output:
{
"color": {
"primary": {
"50": "#f0f9ff",
"500": "#3b82f6",
"900": "#1e3a8a"
},
"semantic": {
"success": "#10b981",
"error": "#ef4444",
"warning": "#f59e0b"
}
}
}
What I learned: AI is excellent at creating consistent naming patterns. Where I used to have blueButton, buttonBlue, and btn-primary-blue scattered across files, AI helped me consolidate into a single, logical system in minutes.
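Once the tokens live in JSON, you don’t have to hand-translate them into CSS. Here’s a minimal sketch that flattens a nested token object into CSS custom properties (the token structure mirrors the example above):

```ts
// Minimal sketch: flatten a nested token tree into CSS custom properties.
type TokenTree = { [key: string]: string | TokenTree };

function toCssVars(tree: TokenTree, prefix = '-'): string[] {
  return Object.entries(tree).flatMap(([key, value]) =>
    typeof value === 'string'
      ? [`${prefix}-${key}: ${value};`]       // leaf: emit a variable
      : toCssVars(value, `${prefix}-${key}`)  // branch: recurse with a longer prefix
  );
}

const tokens: TokenTree = {
  color: {
    primary: { '500': '#3b82f6', '900': '#1e3a8a' },
    semantic: { error: '#ef4444' },
  },
};

console.log(`:root {\n  ${toCssVars(tokens).join('\n  ')}\n}`);
// :root {
//   --color-primary-500: #3b82f6;
//   --color-primary-900: #1e3a8a;
//   --color-semantic-error: #ef4444;
// }
```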
Prompt #11: Document Component Specifications
When to use: When you need to document component props, variants, and usage guidelines.
Prompt:
Act as a design systems designer documenting a [COMPONENT NAME] component.
Component details:
- Purpose: [WHAT IT DOES]
- Variants: [LIST VARIATIONS]
- States: [DEFAULT, HOVER, ACTIVE, DISABLED, ETC.]
- Props/Options: [CONFIGURABLE PARAMETERS]
Create documentation including:
1. Component overview (what it is, when to use it)
2. Anatomy (parts breakdown with labels)
3. Variants (all visual options with use cases)
4. Props/API (all configurable options)
5. States (all interactive states)
6. Accessibility (ARIA attributes, keyboard support)
7. Best practices (do's and don'ts)
8. Examples (common implementation patterns)
Real example: Documented a button component with 4 variants, 5 states, and 12 props in under 15 minutes. Previously, I’d procrastinate on this for weeks. Having comprehensive docs meant developers stopped asking me “which button should I use?” every day.
Prompt #12: Create Style Guide Sections
When to use: When writing usage guidelines, voice/tone guidance, or pattern libraries.
Prompt:
Act as a design documentation writer creating style guide content for [SECTION].
Context:
- Brand personality: [ADJECTIVES DESCRIBING YOUR BRAND]
- Target audience: [WHO WILL READ THIS]
- Design principle: [KEY PRINCIPLE THIS ADDRESSES]
Write a style guide section covering:
1. Overview (what this addresses and why it matters)
2. Guidelines (specific rules with rationales)
3. Examples (good vs. bad with explanations)
4. Edge cases (how to handle unusual situations)
Tone: Professional but approachable. Include practical examples.
Opinion time: Most style guides are too generic to be useful. “Be consistent” isn’t actionable. AI can help you write specific guidelines with real examples that designers and developers will actually follow.
Prompt #13: Organize Pattern Library
When to use: When categorizing design patterns for easy discovery.
Prompt:
Act as an information architect organizing a design pattern library.
Current patterns:
[LIST YOUR PATTERNS]
Create a categorized structure with:
1. Navigation patterns (menus, breadcrumbs, tabs, etc.)
2. Content patterns (cards, lists, tables, etc.)
3. Input patterns (forms, search, filters, etc.)
4. Feedback patterns (alerts, toasts, modals, etc.)
5. Layout patterns (grids, sidebars, heroes, etc.)
For each category:
- List relevant patterns
- Define when to use each
- Note relationships between patterns
- Suggest any missing patterns for completeness
Time saved: Organizing 43 undocumented patterns took 30 minutes with AI vs. the 6 hours I budgeted. That’s not an exaggeration—I literally cleared a Friday afternoon task before lunch.
Accessibility Prompts for WCAG Compliance
I’ll admit something: accessibility used to be my weak spot. I knew the WCAG guidelines existed, but interpreting them for real designs felt overwhelming. AI changed that entirely.
These prompts won’t replace actual accessibility testing with real users (nothing can), but they catch 80% of technical compliance issues before you even show a prototype.
Want a complete accessibility toolkit? Check out our dedicated guide to AI prompts for accessibility professionals with 10 battle-tested prompts for WCAG compliance, inclusive design, and accessible content.
Prompt #14: Check WCAG Compliance
When to use: When reviewing designs for accessibility issues before development.
Prompt:
Act as an accessibility specialist reviewing a design for WCAG 2.1 Level AA compliance.
Design description:
[DESCRIBE YOUR DESIGN OR PASTE SCREENSHOT DESCRIPTION]
Check for compliance issues in:
1. Color contrast (text, UI elements, graphics)
2. Text alternatives (icons, images, charts)
3. Keyboard navigation (tab order, focus states)
4. Form accessibility (labels, error messages, instructions)
5. Heading structure (logical hierarchy)
6. Link text (descriptive, not "click here")
For each issue found:
- Cite the WCAG guideline (e.g., 1.4.3 Contrast Minimum)
- Explain the problem (what's wrong)
- Suggest a fix (how to resolve it)
- Indicate severity (critical, important, minor)
Personal victory: This caught a color contrast issue in a dashboard I designed—light gray text on white background. I thought it looked “clean and minimal.” AI flagged it as failing WCAG 1.4.3. I darkened the text, and suddenly readability improved for everyone, not just low-vision users. Better design through accessibility.
Statistic: According to a 2025 W3C report, AI-assisted accessibility reviews show a 35% improvement in compliance compared to manual reviews alone. Why? Because AI doesn’t get tired or skip repetitive checks.
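These prompt-based reviews work from descriptions. Once you have a running prototype, you can complement them with automated in-browser checks: a minimal sketch, assuming the open-source axe-core library:

```ts
// Minimal sketch: automated WCAG checks on a rendered page using axe-core
// (npm install axe-core). Run this in the browser context of your prototype.
import axe from 'axe-core';

async function auditPage(): Promise<void> {
  const results = await axe.run(document, {
    runOnly: { type: 'tag', values: ['wcag2a', 'wcag2aa'] }, // Level A + AA rules only
  });

  for (const violation of results.violations) {
    console.warn(`${violation.id} (${violation.impact}): ${violation.help}`);
    console.warn('Affected nodes:', violation.nodes.length);
  }
}

auditPage();
```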
Prompt #15: Analyze Color Contrast
When to use: When choosing color combinations or validating an existing palette.
Prompt:
Act as an accessibility expert analyzing color contrast ratios.
Color combinations to check:
1. Text: [HEX] on Background: [HEX] - Font size: [PX/PT]
2. Text: [HEX] on Background: [HEX] - Font size: [PX/PT]
3. UI element: [HEX] on Background: [HEX]
For each:
- Calculate contrast ratio
- Check WCAG AA compliance (4.5:1 for normal text, 3:1 for large text/UI)
- Check WCAG AAA compliance (7:1 for normal text, 4.5:1 for large text)
- If fails, suggest closest passing color
- Provide hex codes for suggested fixes
Note: Large text = 18pt+ or 14pt+ bold
Real use: Every design system I build now includes this check. I paste my entire color palette, and AI tells me which combinations work. It’s saved me from shipping inaccessible designs at least a dozen times.
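One caveat: language models can fumble arithmetic, so treat AI-reported ratios as estimates. The WCAG formula itself is simple enough to verify in a few lines; here’s a sketch for 6-digit hex colors:

```ts
// Relative luminance per WCAG 2.1, for a 6-digit hex color like "#3b82f6".
function luminance(hex: string): number {
  const [r, g, b] = hex.replace('#', '').match(/.{2}/g)!
    .map((c) => parseInt(c, 16) / 255)
    .map((c) => (c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4));
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio = (lighter + 0.05) / (darker + 0.05).
function contrastRatio(fg: string, bg: string): number {
  const [light, dark] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (light + 0.05) / (dark + 0.05);
}

const ratio = contrastRatio('#3b82f6', '#ffffff');
console.log(ratio.toFixed(2), ratio >= 4.5 ? 'passes AA (normal text)' : 'fails AA');
```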
Prompt #16: Generate Alt Text for UI Elements
When to use: When writing alternative text for icons, illustrations, or decorative elements.
Prompt:
Act as an accessibility specialist writing alternative text for UI elements.
For each element, provide:
1. Context: [WHERE IT'S USED]
2. Element type: [ICON/ILLUSTRATION/GRAPHIC/PHOTO]
3. Visual description: [WHAT IT SHOWS]
Write appropriate alt text considering:
- Is it decorative (alt="") or informative (descriptive alt)?
- Is the information conveyed elsewhere (alt can be shorter)?
- Does it need a longer description (use aria-describedby)?
- Is it interactive (describe action, not appearance)?
Follow format:
- Decorative: alt=""
- Informative: alt="[concise description]"
- Functional: alt="[action description]"
Example:
- ❌ Bad alt: “icon”
- ❌ Still bad: “magnifying glass icon”
- ✅ Good alt: “Search products”
Admission: Writing good alt text is harder than it looks. I used to over-describe (“blue magnifying glass icon with handle”) or under-describe (“search”). AI taught me to focus on function, not form.
Prompt #17: Review Keyboard Navigation
When to use: When designing interactive elements or complex interfaces.
Prompt:
Act as an accessibility expert reviewing keyboard navigation design.
Interface description:
[DESCRIBE YOUR INTERFACE - COMPONENTS, INTERACTIONS, HIERARCHY]
Evaluate:
1. Tab order (is it logical and predictable?)
2. Focus indicators (are they visible and clear?)
3. Keyboard shortcuts (any conflicts with browser/OS?)
4. Trapped focus (can users exit modals/overlays?)
5. Skip links (can users bypass repetitive navigation?)
6. Interactive elements (all keyboard accessible?)
For each issue:
- Describe the problem
- Explain why it's problematic for keyboard users
- Suggest a solution following ARIA best practices
Real scenario: I designed a mega-menu dropdown with 40+ links. AI pointed out that keyboard users would have to tab through all 40 to reach the main content—a terrible experience. Solution: added skip navigation links and arrow key navigation. The AI didn’t implement it, but it caught the problem before we shipped.
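For reference, the arrow-key half of that fix is only a few lines of DOM scripting. A minimal sketch, assuming the menu links carry standard ARIA menu roles:

```ts
// Minimal sketch: ArrowUp/ArrowDown moves focus through menu items,
// wrapping at either end. Assumes role="menu" / role="menuitem" markup.
const menu = document.querySelector<HTMLElement>('[role="menu"]');
const items = Array.from(
  menu?.querySelectorAll<HTMLElement>('[role="menuitem"]') ?? []
);

menu?.addEventListener('keydown', (event) => {
  const current = items.indexOf(document.activeElement as HTMLElement);
  if (event.key === 'ArrowDown') {
    event.preventDefault();
    items[(current + 1) % items.length]?.focus();
  } else if (event.key === 'ArrowUp') {
    event.preventDefault();
    items[(current - 1 + items.length) % items.length]?.focus();
  }
});
```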
Learn more about creating accessible design prompts in our prompt collection.
User Testing & Feedback Prompts
User testing is where the rubber meets the road. No amount of AI can replace watching real users struggle with your design. But AI can help you prepare better tests and synthesize results faster.
While these prompts focus on UX research and usability testing, you may also benefit from our specialized QA engineer AI prompts for test case generation, bug reporting, and automation scripts that complement your design testing workflow.
Prompt #18: Create User Testing Scripts
When to use: Before usability testing sessions when you need structured test scenarios.
Prompt:
Act as a UX researcher creating a usability testing script for [PRODUCT/FEATURE].
Test goals:
- [GOAL 1]
- [GOAL 2]
- [GOAL 3]
Participants: [USER TYPE]
Duration: [TIME LIMIT]
Create a test script with:
1. Introduction (welcome, consent, explain thinking-aloud)
2. Background questions (understand participant context)
3. Tasks (3-5 realistic scenarios to complete)
4. Follow-up questions (probe deeper on observations)
5. Closing (debrief, thank you)
For each task:
- Scenario (realistic context)
- Starting point (where they begin)
- Success criteria (what completion looks like)
- Time estimate (expected duration)
- Probing questions (if they get stuck)
Why this matters: Well-structured tests yield better insights. I used to wing it in testing sessions and miss important follow-up questions. Now I use AI to generate comprehensive scripts that I customize for each project.
Prompt #19: Plan Usability Tests
When to use: When defining test methodology, participant criteria, and logistics.
Prompt:
Act as a UX research strategist planning a usability study for [PRODUCT/FEATURE].
Research questions:
- [QUESTION 1]
- [QUESTION 2]
Create a test plan including:
1. Research methodology (moderated/unmoderated, remote/in-person)
2. Participant criteria (who to recruit, sample size)
3. Screening questions (to qualify participants)
4. Test environment setup (what's needed)
5. Success metrics (how to measure findings)
6. Timeline and budget estimates
Justify methodology choices based on research goals.
Personal take: AI is surprisingly good at suggesting appropriate methodologies. It recommended unmoderated remote testing for a mobile app feature test, which saved us $3,000 in lab costs and got us results in 2 days instead of 2 weeks.
Prompt #20: Synthesize User Feedback
When to use: After testing sessions when you have recordings, notes, or survey responses.
Prompt:
Act as a UX researcher synthesizing usability test findings.
Test details:
- Participants: [NUMBER and TYPE]
- Tasks tested: [LIST]
- Format: [MODERATED/UNMODERATED, REMOTE/IN-PERSON]
Raw findings:
[PASTE NOTES, QUOTES, OBSERVATIONS]
Provide:
1. Overview (high-level summary of findings)
2. Critical issues (blockers/major usability problems)
3. Moderate issues (frustrations that warrant fixing)
4. Minor issues (nice-to-fix but not critical)
5. Positive observations (what worked well)
6. Participant quotes (support for each theme)
7. Recommended actions (prioritized list of fixes)
Format as an executive summary suitable for stakeholders.
Real impact: Processed 10 usability test sessions with Claude 4’s 200K context. It identified 3 critical patterns I initially missed because I was focused on individual user quirks. The synthesis helped me see the forest, not just the trees.
Prompt #21: Generate Testing Questions
When to use: When you need post-task or post-test survey questions.
Prompt:
Act as a UX researcher writing post-test survey questions for [PRODUCT/FEATURE].
Test focus: [WHAT WAS TESTED]
Create questions for:
1. Task difficulty (Likert scale + why)
2. Satisfaction (rating + open feedback)
3. Confidence (did they feel successful?)
4. Comparison (vs. current solution if applicable)
5. Suggestions (improvements they'd like)
For each:
- Question text (clear, unbiased wording)
- Response format (scale, multiple choice, open-ended)
- Rationale (what insight this reveals)
Lesson learned: Question wording matters enormously. AI helps me write neutral, non-leading questions. I used to ask “How much did you like this feature?” (biased toward positive). Now: “How would you rate your experience with this feature?” (neutral).
Design Critique and Iteration Prompts
Getting good feedback is hard. Most designers either get vague reactions (“looks good!”) or nitpicky details (“move this 2 pixels left”). AI can provide structured, systematic critique that helps you improve without crushing your creativity.
Prompt #22: Get Design Critique
When to use: When you want objective feedback on visual design, UX flow, or information architecture.
Prompt:
Act as a senior UX/UI designer providing constructive design critique.
Design description:
[DESCRIBE YOUR DESIGN OR UPLOAD SCREENSHOT]
Design goals:
- [GOAL 1]
- [GOAL 2]
Critique the design for:
1. Visual hierarchy (is the most important content prominent?)
2. Usability (can users accomplish their goals easily?)
3. Accessibility (any obvious WCAG issues?)
4. Consistency (does it feel cohesive?)
5. Branding (does it align with brand personality?)
6. Edge cases (what happens with long text, empty states, errors?)
For each issue:
- Describe the problem
- Explain why it matters
- Suggest 2-3 possible solutions
- Indicate priority (must-fix vs. consider)
Be honest but constructive. Focus on user impact, not personal preference.
Honest take: AI critique is like feedback from a junior designer—competent but not inspired. It catches obvious problems (inconsistent spacing, weak hierarchy) but misses nuanced issues (emotional resonance, brand voice). Use it for a systematic first pass, not final judgment.
Tool note: Claude 4’s vision feature is killer for this. I can literally upload a screenshot and get detailed design feedback in seconds. It won’t replace human critique, but it helps me iterate faster.
Prompt #23: Prepare Stakeholder Presentations
When to use: When presenting design decisions to non-designers (executives, product managers, developers).
Prompt:
Act as a design strategist preparing a design presentation for [STAKEHOLDER TYPE].
Design decisions to present:
- [DECISION 1]
- [DECISION 2]
- [DECISION 3]
Audience concerns:
- [E.G., DEVELOPMENT COMPLEXITY, TIME TO MARKET, USER ADOPTION]
Create presentation content with:
1. Executive summary (2-3 sentence overview)
2. Problem statement (business impact of the problem)
3. Design solution (how the design addresses it)
4. Decision rationale (why you chose this approach)
5. User benefit (how this helps users)
6. Business benefit (how this helps the company)
7. Trade-offs (what you're sacrificing and why it's worth it)
8. Next steps (what happens next)
Emphasize business value, not design aesthetics. Use metrics where possible.
Why this works: Stakeholders don’t care about your typography choices. They care about business outcomes. AI helps me frame design decisions in business language, which gets designs approved faster.
Prompt #24: Document Design Decisions
When to use: When you need to record the rationale behind design choices for future reference.
Prompt:
Act as a design documentation specialist creating design decision records.
Decision: [WHAT YOU DECIDED]
Context: [PROJECT, FEATURE, TIMEFRAME]
Document:
1. Problem (what problem prompted this decision?)
2. Options considered (what alternatives were explored?)
3. Decision (what was chosen and why?)
4. Rationale (reasoning behind the choice)
5. Trade-offs (what are we accepting/rejecting?)
6. Success criteria (how will we know if this was right?)
7. Review date (when should this be reconsidered?)
Format as a decision record suitable for design system documentation or project archives.
Personal practice: I use this religiously now. Six months later, when someone asks “why did we design it this way?” I have a clear record instead of vague memories. It’s saved me from revisiting settled decisions dozens of times.
Prompt #25: Prioritize Design Iterations
When to use: When you have a long list of improvements and need to decide what to tackle first.
Prompt:
Act as a product designer prioritizing design improvements.
Improvements identified:
[LIST YOUR BACKLOG ITEMS WITH BRIEF DESCRIPTIONS]
Prioritize using:
1. User impact (how many users affected, severity of issue)
2. Business impact (effect on key metrics/goals)
3. Implementation effort (design + development time)
4. Dependencies (what must happen first)
Provide:
- High priority (must-do, high impact, reasonable effort)
- Medium priority (should-do, good ROI)
- Low priority (nice-to-have, low ROI or high effort)
- Deferred (not worth doing now)
For each item:
- Priority tier
- Impact score (1-10)
- Effort score (1-10)
- Rationale (why this priority)
Reality check: AI’s prioritization isn’t perfect—it doesn’t know your team’s capacity or strategic goals. But it gives you a structured framework to work from, which beats prioritizing by gut feeling.
Pro Tips: Customizing AI Prompts for Your Design Workflow
Generic prompts get you generic results. Here’s how I customize prompts for specific projects, clients, and design challenges.
Adding Design Context
Every project has constraints. The more context you give AI, the better the output:
Context to include:
- Brand personality: “Friendly and approachable” vs. “Professional and authoritative” leads to completely different outputs
- Target audience: Designing for Gen Z vs. enterprise IT admins requires different approaches
- Design system: Reference existing components, tokens, and patterns to maintain consistency
- Technical constraints: Mobile-first, progressive web app, accessibility level AA, etc.
Example context block:
Brand: Playful, approachable fintech for millennials
Audience: 25-35, tech-savvy, value transparency
Design system: Material Design 3 with custom color palette
Constraints: Mobile-first, WCAG AA, max 3-tap navigation depth
Add this to the start of any prompt, and outputs instantly become more relevant.
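If you reuse prompts across a project, it’s worth templating this step. A tiny sketch built on the example block above (the helper name is hypothetical):

```ts
// Hypothetical helper: prepend a reusable project-context block to any prompt.
const projectContext = [
  'Brand: Playful, approachable fintech for millennials',
  'Audience: 25-35, tech-savvy, value transparency',
  'Design system: Material Design 3 with custom color palette',
  'Constraints: Mobile-first, WCAG AA, max 3-tap navigation depth',
].join('\n');

const withContext = (prompt: string): string =>
  `${projectContext}\n\n${prompt}`;

console.log(withContext('Act as a UX researcher creating interview questions for our onboarding flow.'));
```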
Choosing the Right AI Tool
Not all AI models are equal. Here’s my decision tree:
Use GPT-5 when:
- You need speed (responses in 2-3 seconds)
- The task is straightforward (persona generation, basic documentation)
- You’re iterating rapidly (trying different approaches)
Use Claude 4 Opus when:
- You have long documents (interview transcripts, design specs)
- You need deep analysis (comprehensive design critique)
- Accuracy matters more than speed
Use Claude 4 Sonnet when:
- You need vision capabilities (analyzing screenshots)
- You want a balance of speed and quality
- You’re working with design artifacts (Artifacts feature for prototypes)
Use Gemini 3 when:
- You need multimodal input (analyzing multiple screens simultaneously)
- You’re working with Google Workspace (tight integration)
Use Llama 4 when:
- Privacy is critical (healthcare, finance, confidential projects)
- You want to run locally without API costs
Common Mistakes to Avoid
Mistake #1: Trusting AI output without review
I once used an AI-generated accessibility report verbatim. It missed a critical keyboard trap issue that a five-minute manual test would’ve caught. Always verify.
Mistake #2: Not iterating on prompts
Your first prompt attempt will probably be mediocre. If the output isn’t useful, tweak the prompt. Add more context, rephrase the question, or ask for a different format.
Mistake #3: Replacing human judgment
AI is a tool, not a decision-maker. It can suggest five wireframe layouts, but only you know which aligns with strategic goals. Use AI to explore, humans to decide.
Mistake #4: Ignoring brand voice
AI-generated content sounds generic. You have to inject your brand personality—whether that’s playful, serious, quirky, or professional. Edit ruthlessly.
Real-World Results: AI in Design Practice
Let’s talk numbers. I tracked my time for 3 months before and after integrating AI into my workflow.
- Time saved per week: 9.5 hours on average
- Fastest improvement: Research synthesis (from 6 hours to 45 minutes)
- Best ROI task: Design system documentation (from 12 hours to 1.5 hours)
- Least helpful: Visual design ideation (AI suggestions were too generic)
Iteration speed: I can now test 3-4 design directions in the time it used to take to explore one. That’s massive for client presentations where “give us options” is standard.
Accessibility compliance: 89% fewer WCAG issues surfacing in QA, because I now catch them during design with AI checks. This alone saved two rounds of rework on recent projects.
Where AI still falls short:
- Understanding nuanced brand voice
- Recognizing emotional design impact
- Making strategic trade-off decisions
- Designing for delight (not just functionality)
These are the areas where human designers will always be essential. AI handles the systematic, pattern-based work. We handle the creative, empathetic, strategic work.
Frequently Asked Questions
Can AI replace UX/UI designers?
No, and it won’t anytime soon. AI is excellent at systematic tasks—generating frameworks, checking compliance, synthesizing data—but it lacks human empathy, strategic thinking, and creative problem-solving. What AI does is change what designers spend time on. Instead of manually processing interview data for hours, you spend 30 minutes reviewing AI synthesis and focus on strategic insights. Think of AI as an extremely capable intern who handles tedious work while you focus on high-value decisions.
Which AI tool is best for UX design work?
It depends on the task. For quick ideation and general prompts, GPT-5 or GPT-5-Mini are fast and affordable. For in-depth analysis (like processing 50 user interviews), Claude 4 Opus with its 200K context window is unmatched. For design critique involving screenshots, Claude 4 Sonnet’s vision feature is killer. And if privacy is critical (healthcare, finance), run Llama 4 locally. Most designers I know use 2-3 tools depending on the job.
How do I avoid generic AI-generated designs?
Add specific context to every prompt. Include your brand personality, target audience, design constraints, and strategic goals. Generic prompt = generic output. Specific prompt = useful starting point. Also, never ship AI output as-is. Use it to explore directions, then inject your brand voice, visual rhythm, and emotional resonance. AI gives you structure; you provide soul.
Is AI good for accessibility work?
Yes, for technical compliance checks. AI excels at checking color contrast ratios, identifying missing alt text, reviewing heading hierarchy, and flagging WCAG violations. However, it can’t replace testing with actual users who have disabilities. AI catches the systematic issues (contrast fails, missing labels), but real users reveal usability problems AI can’t predict (confusing navigation, overwhelming cognitive load). Use AI for first-pass checks, humans for validation.
Can ChatGPT help with user research?
Yes, for synthesis and pattern identification. If you have 20 user interview transcripts, AI can identify common themes, extract quotes, and generate personas in minutes. What AI can’t do is conduct empathetic interviews, read body language, or ask insightful follow-up questions. Use AI to process research data faster, but never let it replace talking to actual users.
How do I use AI ethically in design?
Three rules: (1) Always disclose when user-facing content is AI-generated, (2) Verify all claims and data before using them, and (3) Never use AI to replace actual user testing. AI is a tool for efficiency, not a replacement for human empathy and judgment. Also, be transparent with clients about your process—most appreciate faster iterations and don’t care if AI assisted, as long as the end result solves their problem.
What are the limitations of AI for UX design?
AI struggles with: (1) Understanding brand personality and emotional resonance, (2) Making strategic trade-offs (should we prioritize speed or feature richness?), (3) Designing for delight and surprise, (4) Reading contextual nuances (cultural differences, industry-specific patterns), and (5) Recognizing when to break design rules purposefully. AI optimizes for patterns it’s seen before. Breakthrough, innovative design still requires human creativity.
For more role-specific prompts beyond design, explore our full prompt collection with templates for developers, product managers, marketers, and more.
Conclusion
AI won’t replace designers, but it will fundamentally change what “doing design” looks like. The tedious parts—research synthesis, accessibility checks, documentation—become fast and systematic. That frees you to focus on the parts AI can’t do: strategic thinking, creative problem-solving, and designing experiences that resonate emotionally.
Start small. Pick 2-3 prompts from this guide that address your biggest time sinks. Try them on your next project. Customize them for your workflow. Within a month, you’ll wonder how you ever worked without them.
The designers thriving in 2026 aren’t the ones resisting AI—they’re the ones using it as a creative partner. AI explores options, you make decisions. AI checks compliance, you ensure empathy. AI documents systems, you build delightful experiences.
The future of design is human creativity amplified by AI speed. Ready to 10x your design productivity? Try these prompts this week and see what becomes possible.