What Promotion Committees Actually Look For (From People Who Have Served on Them)
The number one reason promotions get rejected is not weak work. It is insufficient evidence of impact in the promotion document. The committee literally does not have what it needs to say yes.
That distinction matters because the fix is completely different. If the problem were your work quality, you would need to become a better engineer. If the problem is your evidence - and it almost always is - you need to become a better writer. Specifically, you need to write the document that a committee of strangers can evaluate in 3 to 5 minutes and walk away convinced you are already operating at the next level.
Here is exactly how that evaluation works at Google, Meta, and Amazon - the three companies with the most structured (and most studied) promotion processes in tech. And here are the 5 most common reasons promotions get rejected, along with what tips a borderline case from "no" to "yes."
How Promotion Committees Actually Work
Google: The Independent Committee Model
At Google, your promotion is decided by a committee of people who have never worked with you. This is by design - it eliminates favoritism and forces everything to be evaluated based on written evidence.
The process works like this:
- You (or your manager) prepare a promotion packet with evidence of your impact, mapped to the criteria for the next level.
- Peer reviewers submit written feedback. Your manager summarizes the feedback for the committee - the committee typically does not see the raw peer reviews.
- Under the newer GRAD (Googler Reviews and Development) system, your manager is now part of the committee and discusses your packet. Previously, the manager was excluded entirely.
- The committee reviews the packet, asks questions, and decides yes or no. Multiple candidates are reviewed in a single session - often 10 or more.
Google runs two promotion cycles per year, roughly in March and September. The critical insight: because the committee members are strangers to your work, the quality of your written packet is everything. There is no "well, we know their work is actually good" fallback. If it is not in the document, it does not exist.
Meta: The Manager-Driven Calibration Model
Meta uses a Performance Summary Cycle (PSC) that combines 360-degree feedback with manager-driven calibration. The process is different from Google in a critical way: your manager is the primary advocate for your promotion.
- You write a self-review. You request peer feedback from 3 to 5 colleagues.
- Your manager reads all the feedback, writes their assessment of your performance, and proposes a rating.
- Managers enter calibration meetings where they present and defend their reports. Each engineering level has a separate calibration. Senior ICs may also provide input.
- Ratings go up the chain for VP approval. There is no formal "forced curve," but there are firm guidelines for what percentage of employees should fall at each rating level. The top and bottom tiers are each roughly 2% of the org.
Because your manager presents your case in calibration, giving them a structured, evidence-rich self-assessment is critical. Your manager is essentially reading from your document while defending your case against other managers who are all advocating for their own reports.
Amazon: The Narrative Document Model
Amazon's promotion process centers on a written "promo doc" - a detailed narrative that follows Amazon's famously rigorous documentation standards. This is reviewed during the Organization and Leadership Review (OLR).
- Your manager writes a promotion document (though in practice you should draft it yourself and give it to them). For a promotion from SDE II to SDE III (L5 to L6), this document is typically 15+ pages. Even an SDE I to SDE II promotion requires 5+ pages.
- You need a minimum of 4 peer feedback responses, with 6+ recommended. Crucially, Amazon recommends seeking feedback from people at or above the level you are being promoted to.
- The document goes to a calibration panel of managers, peers, and Bar Raisers who debate the merits of each candidate.
- Impact must be framed through Amazon's 16 Leadership Principles. As of 2025, Amazon formally embedded LP adherence as a core metric in reviews, with only 5% of employees eligible for the top "role model" grade.
Amazon's process is arguably the most documentation-intensive in tech. The quality and depth of your promo doc is the single biggest factor in whether you get promoted.
The 3-Minute Rule: Why Your Document Matters More Than Your Work
Here is the uncomfortable math of how promotion decisions get made:
- You spent 12 months doing the work.
- Your manager spent 1 to 2 hours preparing your case.
- The committee spends approximately 3 to 5 minutes evaluating your entire candidacy.
At Google, where the promotion committee reviews roughly 10 candidates per session, each review ranges from 3 to 20 minutes depending on the complexity and level of consensus. For straightforward cases - either clearly yes or clearly no - the review can be as short as 3 minutes. For contested cases, it might stretch to 15 to 20 minutes.
But most candidates are not contested. Most candidates are evaluated quickly. Which means the first 30 seconds of your packet - the executive summary - determines whether the committee leans toward "yes" or "let us look for reasons to say no."
This is not cynicism. It is the structural reality of reviewing dozens of candidates in a single session. Committee members are human. They have limited time and attention. A clear, well-structured packet that leads with impact gets a favorable first impression. A vague, rambling packet that buries the key evidence on page 3 starts in a hole.
The Top 5 Reasons Promotions Get Rejected
Based on how committees operate at Google, Meta, Amazon, and Microsoft, here are the 5 most common rejection reasons - and an estimate of how frequently each one drives the decision:
1. Insufficient Evidence of Impact (~35% of rejections)
This is the most common reason by a significant margin. The candidate did strong work, but the packet either does not include quantified impact or describes the work in terms of activities rather than outcomes.
What the committee sees: "Worked on the migration to the new storage system."
What they need to see: "Led the migration of 50+ services to the new storage system, reducing storage costs by $1.2M/year and improving read latency by 40% across all affected services."
The fix is not doing more work - it is documenting the work you already did with specific numbers. For a full framework on quantifying impact, read how to quantify your impact when it feels unquantifiable.
2. Scope Not at the Next Level (~25% of rejections)
The candidate is executing exceptionally well - but at their current level. They are the best L5 on the team, but there is no evidence of operating at L6 scope.
Each level has a specific scope expectation. At Google, the jump from L5 to L6 requires evidence of multi-team scope and technical leadership. At Amazon, moving from L5 to L6 (SDE II to SDE III) requires demonstrated ownership of "significant, increasingly complex systems or components."
The committee is not looking at whether you did your current job well. They are looking at whether you are already doing the next job. This is a subtle but critical distinction.
3. Weak or Missing Peer Evidence (~15% of rejections)
Peer reviews are one of the strongest signals committees rely on - often more than manager advocacy. When peer reviews are vague ("great to work with, highly recommend"), the committee has nothing to anchor against the criteria.
At Google, the committee reviews peer feedback as summarized by the manager. At Meta, peer reviews are part of the 360 feedback package that goes into calibration. At Amazon, you need at minimum 4 peer feedback responses, and they should ideally come from people at or above your target level.
The fix: guide your peer reviewers. Instead of "can you write a peer review for me?" say "I am going for promotion to [level]. Could you speak to my work on [specific project] and its impact on [specific area]? It would help to address [specific criterion] from the promotion rubric."
4. Timing and Organizational Factors (~15% of rejections)
Sometimes the candidate is ready, but the circumstances are wrong: the biggest project shipped too recently for results to be measurable, the team is mid-reorg and the manager does not have the political capital to push a promotion, or the promotion budget is constrained.
Data from Pave shows that the average promotion rate across US tech companies dropped to 3.8% in 2024, down from 5.2% in 2023. Engineering promotion rates specifically sit at 3.7%. The technology sector saw a 42% decline in promotion rates since 2022. This is not about individual performance - it is a structural headwind that makes timing even more important.
For a detailed calendar of when to submit at each major company, read when to ask for a promotion.
5. Perception Gap - Impact Not Visible (~10% of rejections)
This one is especially painful: the candidate did the work and has the impact, but nobody outside their immediate team knows about it. At Google, committee members are often from different orgs - they have no context about your project unless you provide it. At Meta, calibration involves managers from across the org comparing candidates they may never have interacted with.
If your impact is invisible, it is functionally the same as no impact in the committee room. The solution is not self-promotion - it is documentation. Write design docs. Send project updates. Present at tech talks. Create artifacts that travel beyond your team so that when your name comes up in calibration, at least some people in the room have context.
What Evidence Carries the Most Weight
Not all evidence is equal. Here is a rough ranking of what carries the most weight in promotion committees, from strongest to weakest:
- Quantified business impact (highest weight). Revenue generated, costs saved, users impacted, latency reduced, incidents prevented - anything with a dollar sign, percentage, or user count. This is the gold standard because it is objective and directly tied to outcomes the company cares about.
- Cross-team or organizational impact. Evidence that your work affected teams, systems, or outcomes beyond your immediate group. This is the primary signal for promotions above the Senior level.
- Strong, specific peer endorsements. A peer review that says "they drove the technical approach for our cross-team migration, and without their leadership we would not have shipped on time" is powerful. A peer review that says "great engineer, would recommend for promotion" is noise.
- Manager advocacy with specific examples. Your manager saying "they are ready" is necessary but not sufficient. Your manager saying "they independently identified the caching bottleneck, proposed a solution that 3 teams adopted, and reduced p99 latency by 60%" is both.
- Sustained performance over time. Committees are wary of one-quarter wonders. They want to see consistency across 2+ review periods. Amazon explicitly states that consistency is key: you should be operating at 80%+ of the next level's guidelines before you have a real case.
- Activity metrics (lowest weight). Lines of code, PRs merged, bugs fixed, tickets closed. These show effort, not impact. They are supporting evidence at best and filler at worst.
Common Misconceptions About the Process
Misconception 1: "My manager decides if I get promoted."
At Google, your manager is one voice on a committee. At Meta, your manager advocates for you, but the calibration group makes the final call with VP approval. At Amazon, the promo doc goes through a formal review panel. In all cases, your manager is necessary but not sufficient. The evidence in the document is what decides.
Misconception 2: "Working harder will get me promoted faster."
Working harder at your current level makes you a better [current level] engineer. Getting promoted requires demonstrating next-level behavior. These are different things. The engineer who ships 50% more code but never influences another team is working harder. The engineer who ships 20% less code but creates a framework that makes 10 other engineers faster is working at the next level.
Misconception 3: "The committee already knows my work."
At Google, the committee members have likely never met you. At Meta, the calibration group includes managers from other teams. At Amazon, Bar Raisers may be from entirely different organizations. Nobody in that room has the full context of your daily work. They have a document and 3 to 5 minutes. Write accordingly.
Misconception 4: "I just need to do the work and the promotion will come."
This is the most expensive misconception in tech careers. The promotion will not "come." You have to build the case, document the evidence, guide your peer reviewers, arm your manager, and time your submission to the cycle. Promotion is a project in itself, and treating it as one is the single biggest thing you can do to improve your odds.
Misconception 5: "If I got deferred, my work was not good enough."
Deferral usually means the evidence was not strong enough, not that the work was weak. This is good news - it means you probably do not need to change what you do, just how you document it. For strategies on recovering from a deferral, read what actually happens when you get passed over.
What Tips a Borderline Case from "No" to "Yes"
Most candidates who get rejected are not clearly unqualified. They are borderline. The committee could go either way. Here is what makes the difference for those close calls:
1. A Killer Executive Summary
The first 3 to 4 sentences of your packet set the frame. If the executive summary clearly states what level you are operating at, names the scope of your impact, and includes 1 to 2 headline metrics, the committee starts from a positive position. If the summary is generic ("I have been a productive member of the team and contributed to multiple projects"), the committee starts skeptical.
2. One Undeniable Accomplishment
Every strong promotion packet has at least one accomplishment that makes the committee say "that is clearly next-level work." It does not need to be the biggest project - it needs to be the clearest example of next-level scope, impact, and autonomy. Make sure that one story is told completely: context, what you did, how you did it differently than someone at your current level would have, and the measurable result.
3. Aligned Peer Reviews
When the committee reads 3 peer reviews that independently mention the same qualities - "drives technical direction," "influences teams beyond their own," "multiplier for the org" - it creates a pattern they can trust. Misaligned peer reviews ("great coder" vs. "emerging leader" vs. "helpful teammate") create noise and uncertainty.
4. Manager Confidence
In calibration, other managers can tell when your manager is genuinely confident versus doing a favor. A confident manager has specific data points memorized, can answer pushback questions, and can articulate why this candidate is different from the last person at this level who got deferred. That confidence comes from having reviewed your packet early - ideally 4 to 6 weeks before calibration.
5. Evidence of Trajectory, Not Just Snapshots
A borderline candidate with a clear growth trajectory gets the benefit of the doubt. Show that your scope has expanded over the past 12 months, that your influence has grown, and that the most recent evidence is the strongest. Committees are more likely to approve someone who is clearly on an upward trend than someone whose best accomplishment was 9 months ago.
The Promotion Packet Checklist: What Committees Need to See
Use this as a checklist before submitting your packet:
- Executive summary - 3 to 4 sentences stating your scope, headline impact, and why you are operating at the next level.
- 3 to 5 key accomplishments - each with context, your specific role, and quantified outcomes. Mapped to promotion criteria.
- Evidence of scope beyond your team - at least one cross-team or org-level project.
- Quantified metrics - every accomplishment should include at least one number: revenue, users, latency, cost, velocity, adoption rate.
- Peer review alignment - have you guided your peer reviewers on what criteria to address?
- Manager pre-read - has your manager reviewed the packet at least 4 weeks before calibration?
- Next-level evidence - does the packet show you already operating at the next level, not just excelling at your current one?
For guidance on how to have the promotion conversation with your manager and get the alignment you need, read how to ask your manager for a promotion.
Give the Committee What It Needs to Say Yes
You have done the work. The missing piece is translating that work into a document that a committee of strangers can evaluate in 3 to 5 minutes and walk away convinced.
GetPromoted builds that document for you. A guided 10-minute interview extracts your impact, maps it to promotion criteria, and structures everything into the packet format used at Google, Meta, Amazon, and Microsoft. Executive summary, quantified accomplishments, leadership evidence, and key metrics - all ready for your manager to review and the committee to evaluate.
Preview it free. Pay $79 (reduced from $99) only if it makes your case better than you could write it yourself. 100% money-back guarantee. A career coach charges $500+ and takes 3 to 4 weeks. This takes 10 minutes.
Most companies run promo cycles in Q1 and Q3. Do not miss this window.