performance-regression-debugging

Installs: 118
Rank: #7225

Install

npx skills add https://github.com/aj-geddes/useful-ai-prompts --skill performance-regression-debugging

Performance Regression Debugging Overview

Performance regressions occur when code changes degrade application performance. Detection and quick resolution are critical.

When to Use

- Performance degrades after a deployment
- Metrics show a negative trend
- Users complain about slowness
- A/B testing shows unexpected variance
- As part of regular performance monitoring

Instructions

1. Detection & Measurement

```javascript
// Before: 500ms response time
// After: 1000ms response time (2x slower = regression)

// Capture baseline metrics
const baseline = {
  responseTime: 500,            // ms
  timeToInteractive: 2000,      // ms
  largestContentfulPaint: 1500, // ms
  memoryUsage: 50,              // MB
  bundleSize: 150               // KB gzipped
};

// Measure again after the change
const current = {
  responseTime: 1000,
  timeToInteractive: 4000,
  largestContentfulPaint: 3000,
  memoryUsage: 150,
  bundleSize: 200
};

// Calculate regressions
const regressions = {};
for (const metric in baseline) {
  const change = (current[metric] - baseline[metric]) / baseline[metric];
  if (change > 0.1) { // >10% degradation
    regressions[metric] = {
      baseline: baseline[metric],
      current: current[metric],
      percentChange: (change * 100).toFixed(1) + '%',
      severity: change > 0.5 ? 'Critical' : 'High'
    };
  }
}

// Results:
// responseTime: 500ms → 1000ms (100% slower = CRITICAL)
// largestContentfulPaint: 1500ms → 3000ms (100% slower = CRITICAL)
```

2. Root Cause Identification

Systematic Search:

Step 1: Identify Changed Code
- Check git commits between versions
- Review code review comments
- Identify risky changes
- Prioritize by likelihood

Step 2: Binary Search (Bisect)
- Start with the suspected change
- Disable the change
- Re-measure performance
- If performance improves, this is the issue
- If not, disable other changes

```shell
git bisect start
git bisect bad HEAD
git bisect good v1.0.0
# Test each commit
```
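The bisect loop can be automated with `git bisect run`, which marks each commit good or bad based on a script's exit code. A minimal Node sketch, assuming a hypothetical `check-perf.js` invoked as `git bisect run node check-perf.js`; the benchmark body and the 500ms baseline are illustrative stand-ins for a real measurement:

```javascript
// check-perf.js (hypothetical): exit 0 = good commit, exit 1 = bad commit.
// Run with: git bisect run node check-perf.js

function runBenchmark() {
  // Stand-in workload; replace with your real request/render benchmark.
  const start = process.hrtime.bigint();
  let sum = 0;
  for (let i = 0; i < 1e6; i++) sum += i;
  const end = process.hrtime.bigint();
  return Number(end - start) / 1e6; // elapsed ms
}

// Flag a regression when the measurement exceeds baseline by more
// than the threshold (10% by default, matching the detection step).
function isRegression(measuredMs, baselineMs, threshold = 0.1) {
  return (measuredMs - baselineMs) / baselineMs > threshold;
}

const BASELINE_MS = 500; // illustrative baseline
process.exitCode = isRegression(runBenchmark(), BASELINE_MS) ? 1 : 0;
```

In practice, run the benchmark several times and compare a median to reduce noise, or bisect will blame the wrong commit.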

Step 3: Profile the Change
- Run a profiler on old vs. new code
- Compare flame graphs
- Identify expensive functions
- Check allocation patterns

Step 4: Analyze Impact
- Code review the change
- Understand what changed
- Check for O(n²) algorithms
- Look for new database queries
- Check for missing indexes


Common Regressions:

N+1 Query:
- Before: 1 query (10ms)
- After: 1,000 queries (1000ms)
- Cause: a JOIN was removed, so queries now run in a loop
- Fix: restore the JOIN or use eager loading
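The N+1 pattern is easiest to see by counting queries. A sketch with an in-memory "database" whose `query` wrapper counts calls; the names are illustrative, not a specific ORM:

```javascript
// In-memory stand-in for a database that counts queries issued.
let queryCount = 0;
const db = {
  users: [{ id: 1 }, { id: 2 }, { id: 3 }],
  posts: [{ id: 10, userId: 1 }, { id: 11, userId: 2 }, { id: 12, userId: 2 }],
  query(fn) { queryCount++; return fn(); }
};

// After the regression: one query for users, then one per user (N+1).
function postsPerUserNPlusOne() {
  const users = db.query(() => db.users);
  return users.map(u => ({
    userId: u.id,
    posts: db.query(() => db.posts.filter(p => p.userId === u.id))
  }));
}

// Before / fixed: fetch once and join in memory — the equivalent of
// a SQL JOIN or ORM eager loading.
function postsPerUserJoined() {
  const { users, posts } = db.query(() => ({ users: db.users, posts: db.posts }));
  const byUser = new Map(users.map(u => [u.id, []]));
  for (const p of posts) byUser.get(p.userId).push(p);
  return users.map(u => ({ userId: u.id, posts: byUser.get(u.id) }));
}

queryCount = 0;
postsPerUserNPlusOne();
const nPlusOneQueries = queryCount; // 1 + N queries

queryCount = 0;
postsPerUserJoined();
const joinedQueries = queryCount; // 1 query
```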

Missing Index:
- Before: index scan (10ms)
- After: sequential scan (500ms)
- Cause: a new filter column has no index
- Fix: add an index on the column

Memory Leak:
- Before: 50MB memory
- After: 500MB after 1 hour
- Cause: listeners are never removed and a cache grows unbounded
- Fix: remove listeners and bound the cache

Bundle Size:
- Before: 150KB gzipped
- After: 250KB gzipped
- Cause: a library was added without tree-shaking
- Fix: use a lighter alternative or code-split

Algorithm Efficiency:
- Before: O(n), 1ms for 1,000 items
- After: O(n²), 1000ms for 1,000 items
- Cause: nested loops were added
- Fix: use a more efficient algorithm or data structure
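The O(n) vs. O(n²) difference above, sketched with duplicate detection (a common way this regression sneaks in):

```javascript
// Regression: nested loops compare every pair — quadratic work.
function hasDuplicateQuadratic(items) {
  for (let i = 0; i < items.length; i++) {
    for (let j = i + 1; j < items.length; j++) {
      if (items[i] === items[j]) return true;
    }
  }
  return false;
}

// Fix: a Set makes each membership check O(1), one pass overall.
function hasDuplicateLinear(items) {
  const seen = new Set();
  for (const item of items) {
    if (seen.has(item)) return true;
    seen.add(item);
  }
  return false;
}
```

Both return the same answers; only the growth rate differs, which is why the regression is invisible on small test data and severe in production.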

3. Fixing & Verification

Fix Process:

1. Understand the Problem
   - Profile and identify exactly what is slow
   - Measure the impact quantitatively
   - Understand the root cause

2. Implement the Fix
   - Make minimal changes
   - Don't introduce new issues
   - Test locally first
   - Measure the improvement

3. Verify the Fix
   - Run the same measurement as before
   - Confirm the regression is gone
   - Ensure no new issues were introduced
   - Compare metrics against the baseline

- Before regression: 500ms
- After regression: 1000ms
- After fix: 550ms (acceptable; minor overhead)

4. Prevent Recurrence
   - Add a performance regression test
   - Set a performance budget
   - Alert on regressions
   - Review code changes for performance impact

4. Prevention Measures

Performance Testing:

Baseline Testing:
- Establish baseline metrics
- Record them for each release
- Track trends over time
- Alert on degradation

Load Testing:
- Test with realistic load
- Measure under stress
- Identify bottlenecks
- Catch regressions before release

Performance Budgets:
- Set a maximum bundle size
- Set a maximum response time
- Set maximum LCP/FCP
- Enforce the budget in CI/CD
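A budget gate in CI can be a few lines: compare measured metrics against the budget and fail the build on any violation. A minimal sketch; the metric names and thresholds are illustrative:

```javascript
// Illustrative budget, matching the metrics tracked earlier.
const budget = {
  responseTime: 600,            // ms
  largestContentfulPaint: 2000, // ms
  bundleSize: 170               // KB gzipped
};

// Returns a list of violations; an empty array means the build passes.
function checkBudget(measured, budget) {
  const violations = [];
  for (const metric in budget) {
    if (measured[metric] > budget[metric]) {
      violations.push(`${metric}: ${measured[metric]} > budget ${budget[metric]}`);
    }
  }
  return violations;
}

const violations = checkBudget(
  { responseTime: 550, largestContentfulPaint: 3000, bundleSize: 200 },
  budget
);
// In CI: if (violations.length) { console.error(violations.join('\n')); process.exit(1); }
```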

Monitoring:
- Track real-user metrics
- Alert on degradation
- Compare releases
- Analyze trends


Checklist:

[ ] Baseline metrics established
[ ] Regression detected and measured
[ ] Changed code identified
[ ] Root cause found (code, data, or infrastructure)
[ ] Fix implemented
[ ] Fix verified
[ ] No new issues introduced
[ ] Performance test added
[ ] Performance budget set
[ ] Monitoring updated
[ ] Team notified
[ ] Prevention measures in place

Key Points

- Establish baseline metrics for comparison
- Use binary search to find culprit commits
- Profile to identify the exact bottleneck
- Measure before and after the fix
- Add performance regression tests
- Set and enforce performance budgets
- Monitor production metrics
- Alert on significant degradation
- Document the root cause
- Prevent recurrence through code review
