
UX Metrics Cheatsheet

A quick reference for the metrics most commonly used to measure user experience. Understand what each metric tells you, how to collect it, and when to use it.

Last updated: July 2025


Behavioral metrics

What users actually do

Task success rate

What it measures: Whether users can complete tasks.

Calculation: (Successful completions / Total attempts) × 100

Collection method: Usability testing, analytics event tracking

When to use:

  • Evaluating task flow effectiveness
  • Comparing design alternatives
  • Establishing baseline for critical tasks

Benchmark: 78% is often cited as average; aim for 90%+ on critical tasks.
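
A minimal Python sketch of the calculation, assuming task outcomes are recorded as one boolean per attempt (from usability-test notes or analytics events; the sample data is illustrative):

  def task_success_rate(outcomes):
      """Percentage of attempts that ended in success.

      outcomes: one boolean per attempt (True = task completed).
      """
      if not outcomes:
          return 0.0
      return 100.0 * sum(outcomes) / len(outcomes)

  # Example: 8 participants attempted the task, 6 completed it
  print(task_success_rate([True, True, False, True, True, False, True, True]))  # 75.0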


Task completion time

What it measures: How long it takes to complete a task.

Calculation: Time from task start to completion

Collection method: Usability testing (observed), analytics (for instrumented flows)

When to use:

  • Measuring efficiency improvements
  • Comparing before/after designs
  • Identifying bottlenecks

Considerations: Raw time includes errors and recovery; time for successful completions only is often more meaningful.
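
To make the successful-only point concrete, here is a small Python sketch that assumes each attempt is recorded as a (seconds, success) pair; the data is illustrative. Reporting the median alongside the mean keeps a few very slow attempts from dominating the number:

  from statistics import mean, median

  def task_times(attempts, successful_only=True):
      """Summarize task completion times in seconds.

      attempts: list of (seconds, success) tuples, one per attempt.
      successful_only: drop failed attempts, per the consideration above.
      """
      times = [t for t, ok in attempts if ok or not successful_only]
      return {"mean": mean(times), "median": median(times), "n": len(times)}

  attempts = [(42, True), (95, False), (38, True), (210, True), (51, True)]
  print(task_times(attempts))
  # {'mean': 85.25, 'median': 46.5, 'n': 4}; the median resists the 210 s outlier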


Error rate

What it measures: Frequency of user mistakes.

Calculation: (Number of errors / Total opportunities for error) × 100

Collection method: Usability testing observation, form analytics, error logging

When to use:

  • Identifying confusing UI elements
  • Evaluating form designs
  • Measuring impact of changes

Types:

  • Critical errors (prevent completion)
  • Non-critical errors (cause delays but recoverable)
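
A short Python sketch of the calculation, assuming each observed error is logged with a severity label and that the number of opportunities for error is defined by the study (both are illustrative assumptions):

  def error_rates(errors, opportunities):
      """Overall and per-severity error rates as percentages.

      errors: one severity label per observed error, e.g. "critical".
      opportunities: total opportunities for error across all attempts.
      """
      rates = {"overall": 100.0 * len(errors) / opportunities}
      for severity in sorted(set(errors)):
          rates[severity] = 100.0 * errors.count(severity) / opportunities
      return rates

  print(error_rates(["critical", "non-critical", "non-critical"], opportunities=40))
  # {'overall': 7.5, 'critical': 2.5, 'non-critical': 5.0}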

Findability / Success rate

What it measures: Whether users can locate information or features.

Calculation: (Users who found target / Total users) × 100

Collection method: Tree testing, usability testing, first-click testing

When to use:

  • Evaluating navigation and information architecture
  • Testing search effectiveness
  • Validating content organization
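
A minimal Python sketch, assuming tree-test or first-click results are tallied per target as (found, total participants); the target names are illustrative:

  def findability_rates(results):
      """Findability rate per target, as a percentage.

      results: dict mapping target name -> (found_count, total_participants).
      """
      return {target: 100.0 * found / total
              for target, (found, total) in results.items()}

  print(findability_rates({
      "Pricing page": (18, 20),
      "Export settings": (9, 20),
  }))
  # {'Pricing page': 90.0, 'Export settings': 45.0}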

Combine metrics

Single metrics rarely tell the complete story. Task success + time + satisfaction together reveal whether users can do something, how efficiently, and how they feel about it.


Attitudinal metrics

What users think and feel

System Usability Scale (SUS)

What it measures: Perceived overall usability.

Calculation: 10-question survey, scored 0-100 (not a percentage)

Collection method: Post-task or post-study questionnaire

When to use:

  • Benchmark usability over time
  • Compare products or versions
  • Quick standardized assessment

Benchmarks:

  • Above 68 = above average
  • Above 80 = good
  • Above 90 = excellent

Questions include: Ease of use, complexity, consistency, learnability
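
The 0-100 score comes from the standard SUS scoring rule: each odd-numbered (positively worded) item contributes its response minus 1, each even-numbered (negatively worded) item contributes 5 minus its response, and the sum is multiplied by 2.5. A minimal Python sketch:

  def sus_score(responses):
      """Standard SUS score for one respondent.

      responses: ten 1-5 ratings in questionnaire order (item 1 first).
      """
      if len(responses) != 10:
          raise ValueError("SUS requires exactly 10 responses")
      total = 0
      for item, rating in enumerate(responses, start=1):
          total += (rating - 1) if item % 2 == 1 else (5 - rating)
      return total * 2.5

  print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
  # Average the per-respondent scores to report a study-level SUS.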


Net Promoter Score (NPS)

What it measures: Likelihood to recommend.

Calculation: % Promoters (9-10) - % Detractors (0-6)

Collection method: Single survey question

When to use:

  • Tracking overall satisfaction over time
  • Comparing to competitors
  • Executive communication (widely understood)

Limitations: Doesn't explain why users feel that way. Follow with "why" questions.
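
The arithmetic is simple enough to show directly; this Python sketch assumes raw 0-10 responses to the recommend question (the sample ratings are illustrative):

  def nps(ratings):
      """Net Promoter Score from 0-10 ratings; ranges from -100 to +100."""
      promoters = sum(1 for r in ratings if r >= 9)
      detractors = sum(1 for r in ratings if r <= 6)
      return 100.0 * (promoters - detractors) / len(ratings)

  print(nps([10, 9, 8, 7, 7, 6, 3, 9, 10, 5]))  # 10.0 (40% promoters - 30% detractors)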


Customer Satisfaction (CSAT)

What it measures: Satisfaction with specific interactions.

Calculation: Typically the average of 1-5 or 1-7 scale ratings, or the percentage of responses in the top one or two ("top-box") categories

Collection method: Post-interaction survey

When to use:

  • Evaluating specific features or touchpoints
  • Support interaction quality
  • Transaction experiences
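
Because CSAT is reported in both of the ways noted above, this Python sketch computes the mean rating and the top-box percentage; the 1-5 scale and sample ratings are assumptions for illustration:

  from statistics import mean

  def csat(ratings, scale_max=5, top_box=2):
      """CSAT two ways: mean rating and % of responses in the top categories.

      ratings: list of 1..scale_max satisfaction ratings.
      top_box: how many of the highest categories count as "satisfied".
      """
      satisfied = sum(1 for r in ratings if r > scale_max - top_box)
      return {"mean": round(mean(ratings), 2),
              "top_box_pct": round(100.0 * satisfied / len(ratings), 1)}

  print(csat([5, 4, 4, 3, 5, 2, 4]))  # {'mean': 3.86, 'top_box_pct': 71.4}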

Task-level satisfaction

What it measures: How easy users felt a task was.

Calculation: Average response to the Single Ease Question (SEQ), rated on a 1-7 scale

Collection method: Asked immediately after task

When to use:

  • Correlating with behavioral metrics
  • Identifying friction points
  • Comparing task difficulty across features

Benchmark: 5.5+ is typically considered good


Engagement metrics

How users engage with the product

Adoption / Feature usage

What it measures: Whether features are being used.

Calculation: (Users who used feature / Total active users) × 100

Collection method: Analytics

When to use:

  • Evaluating feature discovery
  • Identifying unused features
  • Measuring launch success
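
A small Python sketch of adoption from an analytics event log, assuming each event is a (user_id, feature_name) pair and the total active-user count covers the same period (the field names are illustrative):

  def feature_adoption(events, total_active_users):
      """Adoption rate per feature: % of active users who used it at least once.

      events: iterable of (user_id, feature_name) tuples.
      """
      users_by_feature = {}
      for user, feature in events:
          users_by_feature.setdefault(feature, set()).add(user)
      return {feature: 100.0 * len(users) / total_active_users
              for feature, users in users_by_feature.items()}

  events = [("u1", "export"), ("u2", "export"), ("u1", "share"), ("u1", "export")]
  print(feature_adoption(events, total_active_users=10))
  # {'export': 20.0, 'share': 10.0}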

Retention / Return rate

What it measures: Whether users come back.

Calculation: (Users active in period N who were also active in period N-1) / (Users active in period N-1) × 100

Collection method: Analytics

When to use:

  • Measuring sustained value
  • Evaluating onboarding effectiveness
  • Tracking long-term engagement

Cohort analysis: Track specific user groups over time for more insight.
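
A minimal Python sketch of the period-over-period calculation, assuming you can pull the set of active user ids for each period from analytics (the period names are illustrative); the same sets are a natural starting point for cohort views:

  def retention_rate(active_prev, active_curr):
      """Retention from period N-1 to period N, as a percentage.

      active_prev: set of user ids active in period N-1.
      active_curr: set of user ids active in period N.
      """
      if not active_prev:
          return 0.0
      retained = active_prev & active_curr
      return 100.0 * len(retained) / len(active_prev)

  march = {"u1", "u2", "u3", "u4"}
  april = {"u2", "u4", "u5"}
  print(retention_rate(march, april))  # 50.0: 2 of the 4 March users returned in April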


Frequency of use

What it measures: How often users return.

Calculation: Sessions per user per time period

Collection method: Analytics

When to use:

  • Understanding usage patterns
  • Identifying power users
  • Evaluating engagement initiatives

Time on task / page / session

What it measures: Duration of engagement.

Calculation: Varies by metric

Collection method: Analytics

Caution: Longer isn't always better. Long time might indicate engagement OR confusion. Context matters.


Outcome metrics

Business results of UX

Conversion rate

What it measures: Users who complete a desired action.

Calculation: (Conversions / Total visitors) × 100

Collection method: Analytics

When to use:

  • E-commerce and sign-up flows
  • Landing page optimization
  • Funnel analysis

Abandonment rate

What it measures: Users who start but don't finish.

Calculation: (Incomplete processes / Started processes) × 100

Collection method: Analytics funnel tracking

When to use:

  • Form and checkout optimization
  • Identifying friction points
  • Prioritizing improvements
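
Abandonment is most useful when broken down by funnel step. A Python sketch, assuming analytics reports how many users reached each instrumented step (step names and counts are illustrative):

  def funnel_abandonment(step_counts):
      """Overall and per-step abandonment for an ordered funnel.

      step_counts: list of (step_name, users_reaching_step), in funnel order.
      """
      counts = [n for _, n in step_counts]
      per_step = [(name, round(100.0 * (curr - nxt) / curr, 1))
                  for (name, curr), nxt in zip(step_counts, counts[1:])]
      overall = round(100.0 * (counts[0] - counts[-1]) / counts[0], 1)
      return {"overall": overall, "per_step": per_step}

  funnel = [("cart", 1000), ("shipping", 620), ("payment", 410), ("confirmation", 360)]
  print(funnel_abandonment(funnel))
  # 64% overall; the biggest drop (38%) is between 'cart' and 'shipping'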

Support contact rate

What it measures: How often users need help.

Calculation: (Support contacts / Total users or transactions) × 100

Collection method: Support ticketing system

When to use:

  • Measuring self-service effectiveness
  • Identifying confusing areas
  • Cost of poor UX

Comparative metrics

Before/after, A/B

Change in metrics

For any metric, track:

  • Baseline: Metric before change
  • Post-change: Metric after change
  • % Change: ((New - Old) / Old) × 100

Statistical significance

Ensure differences are real, not random variation:

  • Adequate sample size
  • Confidence level (typically 95%)
  • Effect size worth pursuing
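
For rate metrics such as conversion or task success, one standard-library way to sanity-check significance is a two-proportion z-test. This is a hedged sketch rather than a substitute for proper experiment design; it assumes both samples are large enough for the normal approximation, and the counts are illustrative:

  from math import sqrt, erf

  def two_proportion_z_test(success_a, n_a, success_b, n_b):
      """Two-sided z-test for a difference between two rates.

      Returns (z, p_value). Suitable when both samples are reasonably large.
      """
      p_a, p_b = success_a / n_a, success_b / n_b
      pooled = (success_a + success_b) / (n_a + n_b)
      se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
      z = (p_b - p_a) / se
      p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
      return z, p_value

  # Baseline: 120 of 2,000 visitors converted; redesign: 156 of 2,000
  z, p = two_proportion_z_test(120, 2000, 156, 2000)
  print(round(z, 2), round(p, 4))  # p < 0.05 means significant at the 95% level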

Choosing the right metrics

Research question | Primary metrics
Can users do this? | Task success rate
How efficient is this? | Task time, error rate
Is this easy to learn? | Time on first vs. subsequent use
How do users feel? | SUS, satisfaction, NPS
Is this discoverable? | Findability, first-click
Are users engaged? | Retention, frequency, feature usage
Is this working for business? | Conversion, abandonment
Triangulate

Use multiple metrics together. If task success is high but satisfaction is low, something's wrong even though users "succeed." If satisfaction is high but conversion is low, there may be issues outside UX.


Quick reference card

Metric | Type | Good for
Task success | Behavioral | Can they do it?
Task time | Behavioral | Efficiency
Error rate | Behavioral | Usability issues
SUS | Attitudinal | Overall usability perception
NPS | Attitudinal | Loyalty/satisfaction
SEQ | Attitudinal | Per-task satisfaction
Adoption | Engagement | Feature discovery
Retention | Engagement | Sustained value
Conversion | Outcome | Business results
Abandonment | Outcome | Friction identification