How to Quantify Your Impact When Your Work Feels Unquantifiable

Promotion committees want numbers. But your best work might be mentoring, reliability, or reducing tech debt. Here is how to quantify impact that does not come with a built-in dashboard.

February 23, 2026 - 10 min read

Committees do not promote people who "worked on" things. They promote people who moved numbers. But what if your best work does not come with a built-in dashboard?

If you spent 6 months improving reliability, mentoring three junior engineers, cleaning up tech debt, or building internal tooling, you probably have a nagging feeling that your work matters but you cannot prove it. You sit down to write your self-assessment and end up with vague phrases like "improved system reliability" and "contributed to team velocity."

That vagueness is what gets strong candidates deferred. The work was real. The impact was real. But the committee cannot evaluate what they cannot measure. And they spend an average of 3 to 5 minutes on your packet - so they will not hunt for the impact themselves.

Here is a framework for turning any type of work into measurable promotion evidence, with concrete before-and-after examples for the most common "hard to quantify" categories.

The "So What" Chain: From Activity to Business Impact

Every piece of work, no matter how abstract, connects to a business outcome. The problem is that most people stop at the first link in the chain. Here is the full chain:

  1. Activity - what you did. "Refactored the authentication service."
  2. Output - what the activity produced. "Reduced the codebase by 4,000 lines and consolidated 3 auth providers into 1."
  3. Outcome - what changed because of the output. "New feature onboarding time dropped from 2 weeks to 3 days because developers no longer need to understand 3 separate auth flows."
  4. Business impact - why the outcome matters to the company. "Engineering velocity on auth-dependent features increased by 35%, enabling the team to ship the payments integration 6 weeks ahead of schedule."

Most people write "Refactored the authentication service" on their self-assessment and call it done. That is link 1 of 4. The committee reads it and thinks "so what?"

The fix is to keep asking "so what?" after every statement until you reach a number the business cares about: revenue, users, velocity, cost savings, uptime, or customer satisfaction.

Framework: Quantifying Infrastructure Work

Infrastructure work is notoriously hard to quantify because the whole point is that things run invisibly. But invisible does not mean unmeasurable.

The DORA Metrics Framework

The DevOps Research and Assessment (DORA) framework, developed by Nicole Forsgren, Jez Humble, and Gene Kim and later acquired by Google, provides four industry-standard metrics for measuring delivery performance:

  • Deployment frequency - how often you ship to production
  • Lead time for changes - time from commit to production
  • Change failure rate - percentage of deployments that cause issues
  • Mean time to recovery (MTTR) - how fast you fix production incidents
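If your CI tooling does not surface these four metrics directly, they are straightforward to derive from deployment records. A minimal sketch, assuming a hypothetical list of deploy records exported from your CI system (the field names `committed_at`, `deployed_at`, `caused_incident`, and `recovered_at` are illustrative, not any specific tool's schema):

```python
from datetime import datetime, timedelta

# Hypothetical deploy records; in practice, export these from your
# CI/CD system and incident tracker. Field names are illustrative.
deploys = [
    {"committed_at": datetime(2026, 1, 5, 9, 0),
     "deployed_at": datetime(2026, 1, 5, 11, 0),
     "caused_incident": False, "recovered_at": None},
    {"committed_at": datetime(2026, 1, 12, 14, 0),
     "deployed_at": datetime(2026, 1, 13, 10, 0),
     "caused_incident": True,
     "recovered_at": datetime(2026, 1, 13, 10, 45)},
]

window_days = 28

# Deployment frequency: deploys per week over the window.
freq_per_week = len(deploys) / (window_days / 7)

# Lead time for changes: average commit-to-production time.
lead_times = [d["deployed_at"] - d["committed_at"] for d in deploys]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deploys that caused an incident.
failure_rate = sum(d["caused_incident"] for d in deploys) / len(deploys)

# MTTR: average time from a failed deploy to recovery.
recoveries = [d["recovered_at"] - d["deployed_at"]
              for d in deploys if d["caused_incident"]]
mttr = sum(recoveries, timedelta()) / len(recoveries)

print(f"{freq_per_week:.1f} deploys/week, lead time {avg_lead_time}, "
      f"failure rate {failure_rate:.0%}, MTTR {mttr}")
```

Run this over a window before your change and a window after it, and the before/after deltas are your promotion evidence.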

Before and After Example

Weak: "Improved our deployment pipeline."

Strong: "Redesigned the CI/CD pipeline for the payments service, reducing deploy time from 45 minutes to 8 minutes. Deployment frequency increased from 2x/week to 3x/day. Change failure rate dropped from 18% to 4%. The team shipped 40% more features in Q3 compared to Q2 with the same headcount."

Where to Find the Numbers

  • CI/CD dashboards (GitHub Actions, Jenkins, CircleCI run history)
  • Deployment logs and release frequency reports
  • Incident management tools (PagerDuty, Opsgenie - track MTTR trends)
  • Sprint velocity before and after your changes
  • Feature release dates compared to original timelines

Framework: Quantifying Reliability and On-Call Work

Reliability work prevents disasters. The challenge: committees do not reward you for things that did not happen. You have to make the invisible visible.

Metrics That Work

  • Uptime improvement - "Increased service availability from 99.5% to 99.95%, eliminating approximately 3.3 hours of downtime per month for 2M daily active users."
  • Incident reduction - "Reduced P1/P2 incidents by 60% (from an average of 5 per month to 2 per month) by implementing automated canary deployments and circuit breakers."
  • MTTR improvement - "Reduced mean time to recovery from 90 minutes to 12 minutes by building an automated rollback system, saving an estimated $45K per incident in engineering time and lost revenue."
  • On-call burden reduction - "Reduced after-hours pages by 75% (from 12 per week to 3 per week) by resolving the top 10 recurring alert sources, improving team morale and reducing on-call burnout."
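The downtime arithmetic behind uptime claims is worth showing your work on, since committees can and do sanity-check it. A sketch of the standard conversion, assuming a 730-hour average month:

```python
HOURS_PER_MONTH = 730  # 365 days * 24 hours / 12 months

def monthly_downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per month implied by an availability percentage."""
    return HOURS_PER_MONTH * (1 - availability_pct / 100)

before = monthly_downtime_hours(99.5)    # ~3.65 hours/month
after = monthly_downtime_hours(99.95)    # ~0.37 hours/month
print(f"Downtime avoided: {before - after:.1f} hours/month")
```

Multiply the hours avoided by your users affected or revenue per hour of downtime to get the business-impact number the committee actually wants.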

Before and After Example

Weak: "Improved system reliability and reduced on-call burden."

Strong: "Led a 3-month reliability initiative for the search indexing pipeline. Reduced P1 incidents from 8 to 1 per quarter. Reduced on-call pages by 70%. Improved SLO compliance from 97.2% to 99.8%, exceeding the 99.5% target set by the infrastructure org."

Framework: Quantifying Mentoring and People Development

At the Senior level and above, committees explicitly evaluate your multiplier effect - how you make others more effective. But "I mentored people" is not evidence. You need to quantify the outcomes of your mentorship.

Metrics That Work

  • Ramp time reduction - "Designed an onboarding program for the data pipeline team that reduced new hire time-to-first-commit from 4 weeks to 5 days. Applied to 6 new hires in 2025."
  • Mentee outcomes - "Mentored 3 junior engineers through their first independent project launches. All 3 received Exceeds Expectations ratings in the subsequent review cycle."
  • Knowledge sharing reach - "Created a 12-part internal training series on distributed systems design. 85 engineers attended across 4 teams. Post-training survey showed 4.7/5 average rating and a 40% reduction in architecture review iterations for attendees."
  • Hiring contribution - "Conducted 45 interviews in 2025 (top 5% of interviewers by volume). Calibrated hiring bar document adopted by 3 other teams, standardizing evaluation criteria for backend roles."

Before and After Example

Weak: "Mentored junior engineers and helped with onboarding."

Strong: "Mentored 4 engineers (2 junior, 2 mid-level), resulting in 2 promotions and 1 successful transfer to a senior role. Reduced average onboarding time by 50% through a structured 30-60-90 program now used as the team template."

Framework: Quantifying Tech Debt and Code Quality Work

Tech debt work is the classic "thankless task." Everyone benefits, but nobody notices until it is done. Here is how to make it visible.

Metrics That Work

  • Developer velocity - "Refactored the legacy billing module, reducing the average time to add a new payment method from 3 weeks to 2 days. Enabled the team to launch Apple Pay and Google Pay in the same quarter instead of sequentially."
  • Bug rate reduction - "Migrated the user service from hand-rolled SQL to an ORM with type-safe queries. Production bugs in that service dropped from 8 per month to 1 per month. Customer-reported billing errors dropped by 85%."
  • Build time savings - "Reduced CI build time from 35 minutes to 8 minutes by parallelizing the test suite and introducing incremental builds. Across 15 engineers running an average of 4 builds per day, this saves roughly 27 engineering hours per week."
  • Migration scope - "Led the migration of 200+ microservices from Python 2 to Python 3, coordinating across 8 teams over 4 months. Zero production incidents during migration. Unblocked the org from adopting 3 critical Python 3-only libraries."

The "Engineering Hours Saved" Formula

When you improve developer experience, you can calculate impact with this formula:

(time saved per occurrence) x (frequency) x (number of engineers affected) = total time saved

Example: You automate away a manual deployment step that previously took 20 minutes. Each of the 12 affected engineers ran that step about 3 times per day. 20 min x 3/day x 12 engineers = 720 minutes per day, or 12 hours of engineering time saved daily - roughly 60 hours per week. At a fully loaded engineering cost of $150/hour, that is $9,000/week in recovered productivity.
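The formula above can be sketched as a small reusable calculation. The $150/hour fully loaded cost and the 5-day week are assumptions; substitute your company's figures:

```python
def weekly_savings(minutes_saved: float, runs_per_day_per_engineer: float,
                   engineers: int, workdays_per_week: int = 5,
                   cost_per_hour: float = 150.0) -> tuple[float, float]:
    """(time saved per occurrence) x (frequency) x (engineers affected).

    Returns (engineering hours saved per week, dollars recovered per week).
    The default cost_per_hour is an assumed fully loaded rate.
    """
    minutes_per_day = minutes_saved * runs_per_day_per_engineer * engineers
    hours_per_week = minutes_per_day / 60 * workdays_per_week
    return hours_per_week, hours_per_week * cost_per_hour

# The deployment-step example from above: 20 minutes saved,
# 3 runs per engineer per day, 12 engineers affected.
hours, dollars = weekly_savings(minutes_saved=20,
                                runs_per_day_per_engineer=3, engineers=12)
print(f"{hours:.0f} engineering hours/week, ${dollars:,.0f}/week recovered")
# 60 engineering hours/week, $9,000/week recovered
```

State the inputs alongside the result in your packet so the committee can verify the math in the seconds they will spend on it.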

Framework: Quantifying Process Improvements

Process work - improving code review practices, design doc standards, incident response procedures - creates leverage across the entire team. The key is measuring adoption and downstream outcomes.

Before and After Example

Weak: "Introduced a new code review process for the team."

Strong: "Designed and rolled out a structured code review checklist adopted by 4 teams (35 engineers). Average review turnaround time dropped from 2 days to 4 hours. Post-deploy bug rate decreased by 30% in the first quarter after adoption. The checklist was later adopted org-wide by the engineering excellence team."

More Examples

  • Design doc process: "Created a design doc template and review process that reduced architecture review meetings from 90 minutes to 30 minutes. 80% of design docs now pass review on the first iteration, up from 35%."
  • Incident response: "Rebuilt the incident response playbook for the platform team. Time to classify and route P1 incidents dropped from 25 minutes to 5 minutes. Customer communication SLA compliance improved from 60% to 95%."
  • Sprint planning: "Introduced estimation calibration sessions that improved sprint forecast accuracy from 45% to 78%. The product team was able to commit to customer-facing launch dates with higher confidence, reducing deadline slippage by 50%."

Framework: Quantifying Product Feature Work

Product features are the easiest type of work to quantify - in theory. In practice, most people still describe features in terms of what they built rather than what happened because of what they built.

Metrics That Work

  • User adoption - "Launched the in-app notification system. 70% of daily active users opted in within the first month. Push notification engagement rate: 12% (2x industry average of 6%)."
  • Revenue impact - "Built the recommendation engine for the checkout flow. A/B test showed a 15% increase in average order value. Projected annual revenue impact: $2.4M."
  • Retention - "Designed and shipped the new onboarding flow. Day-7 retention improved from 32% to 41% (28% relative improvement). Estimated impact: 18,000 additional retained users per month."
  • Efficiency - "Built the internal admin tool for the support team. Average ticket resolution time dropped from 15 minutes to 4 minutes. Support team handled 40% more tickets per day without additional headcount."

Before and After Example

Weak: "Built the search feature for the mobile app."

Strong: "Led design and implementation of mobile search, serving 500K+ queries per day. Reduced average time-to-result from 8 seconds (browsing) to 1.2 seconds (search). Search users convert at 2.5x the rate of browse-only users. Feature contributed an estimated $800K in incremental quarterly revenue."

Where to Find Numbers When You Think There Are None

The most common excuse for not quantifying impact is "I do not have access to the data." Here are 10 places to look that most people overlook:

  1. Git history - number of files changed, PRs merged, lines modified. Not impressive alone, but useful for scope: "touched 150 files across 12 services."
  2. CI/CD dashboards - build times, deployment frequency, failure rates over time.
  3. Monitoring tools - Datadog, Grafana, CloudWatch. Pull latency graphs, error rate trends, SLO dashboards before and after your changes.
  4. Incident logs - PagerDuty, Opsgenie, or your postmortem archive. Count incidents, measure MTTR, track recurring issues you resolved.
  5. Jira/Linear/Asana - sprint velocity, cycle time, ticket throughput. Compare team velocity before and after process changes.
  6. Product analytics - Amplitude, Mixpanel, Google Analytics. Feature adoption, user engagement, funnel conversion rates.
  7. Slack search - search for your name or your project name. Count times people asked questions you answered, thanked you for help, or referenced your documentation.
  8. Internal surveys - developer experience surveys, onboarding surveys, tooling satisfaction scores.
  9. Your manager's reports - ask your manager for team OKR dashboards. Your work likely contributed to metrics they track at the team level.
  10. Business metrics - revenue dashboards, customer satisfaction (NPS/CSAT), support ticket volumes. Even if you did not directly move these numbers, your infra/tooling/platform work often correlates with improvements visible here.

The Quantification Cheat Sheet

When you are stuck, use this cheat sheet to find the right metric for your work:

  • Speed: How much faster did something become? (Latency, build time, deploy time, time-to-market, review turnaround)
  • Volume: How much more could we handle? (Requests per second, users served, tickets resolved, features shipped)
  • Quality: How much better did something get? (Error rate, bug count, SLO compliance, test coverage, customer satisfaction)
  • Cost: How much did we save? (Infrastructure costs, engineering hours, vendor spend, support overhead)
  • Reach: How many people or systems were affected? (Teams adopting, engineers using, services migrated, customers impacted)
  • Risk: What bad outcomes did we prevent? (Downtime avoided, security vulnerabilities patched, compliance gaps closed)

Every piece of work maps to at least one of these dimensions. If you cannot find a number, you are probably stuck at link 1 of the "So What" chain. Keep asking "so what?" until you reach something measurable.

Common Mistakes When Quantifying Impact

1. Claiming Full Credit for Team Outcomes

If your team's revenue grew 50% and you were one of 8 engineers, do not claim the 50%. Committees see through this. Instead, isolate your contribution: "I built the recommendation engine that drove a 15% increase in conversion rate, one of 3 key factors in the team's 50% revenue growth."

2. Using Vanity Metrics

"Wrote 50,000 lines of code" or "merged 200 PRs" are activity metrics, not impact metrics. Committees do not care how much you did - they care what changed because of what you did.

3. Being Precise About the Wrong Things

"Reduced latency by 23.7%" is unnecessarily precise and looks like you are cherry-picking. "Reduced latency by roughly 25%" is more credible. Save precision for numbers that are naturally exact, like user counts or dollar amounts.

4. Forgetting to Include Scope and Difficulty

Numbers without context are misleading. "Reduced build time by 80%" means something very different for a 2-person project versus a 200-engineer monorepo. Always include the scale: how many engineers, services, users, or dollars were affected.

For more on what committees evaluate and how they process your evidence, read what promotion committees actually look for.

Stop Underselling Your Work. Start Quantifying It.

You already did the hard part - the actual work. The missing piece is translating it into the language committees speak: specific numbers, clear business impact, and evidence mapped to promotion criteria.

GetPromoted does this translation for you. Answer a few questions about your work in a guided 10-minute interview. The AI extracts your impact, finds the right metrics, and structures everything into a promotion packet format that committees at Google, Meta, Amazon, and Microsoft recognize immediately.

Preview it free. Pay $79 (down from $99) only if it captures your impact better than you could write it yourself. 100% money-back guarantee. A career coach charges $500+ and takes 3-4 weeks. This takes 10 minutes.

Most companies run promo cycles in Q1 and Q3. Do not miss this window.

Stop Reading. Start Building.

Your promotion packet, written in 10 minutes. Free to preview. $79 (down from $99) only if you're happy. 100% money-back guarantee.

Build My Promotion Packet