What Happens When a Few People Review Most Pull Requests?

Discover how a few overloaded reviewers may be putting your code quality at risk, and learn strategies for fair, effective reviews.


The pull request review process is one of the most resource-intensive tasks for development teams. While these reviews offer many benefits (we fully recommend them!), it’s essential to analyze whether they are truly making an impact on our development process—and how we can maximize that impact.

We asked ourselves: Does the concentration of reviews affect the effectiveness of the code review process? In other words, if one person or a small group is handling most of the reviews in a team, how does that impact the quality and value of those reviews?

At Teambit, we focus on understanding how to improve productivity and quality in development teams. That’s why we aim to answer these kinds of questions with data rather than intuition.

Measuring Review Concentration

To explore this question, we analyzed hundreds of thousands of pull requests across various organizations and calculated what we call the “adjusted review concentration.” This metric tells us how much the distribution of review workload deviates from an ideal balance (where everyone reviews equally) and how much of the workload falls on a team’s “top reviewers.”

[Image: Teambit showing review workload distribution across the team]

Simply put, if a team has an adjusted concentration of 1, it means reviews are evenly distributed. If the concentration is 20, it means one person is reviewing 20 times more than what would be “fair” based on the team’s size.

In our study, we found values ranging from 6.4 to over 93. In some teams, a single developer was doing almost all the review work.
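
The exact formula behind the metric isn't published in this article, but a minimal sketch is easy to write. Assuming "adjusted review concentration" compares the busiest reviewer's load against an even split of the total review count, it might look like this (the function name and the definition itself are our assumptions, not Teambit's published method):

```python
from collections import Counter

def adjusted_review_concentration(review_log: list[str]) -> float:
    """Rough sketch: ratio of the top reviewer's review count to the
    "fair share" each reviewer would have under a perfectly even split.

    review_log holds one reviewer name per review performed.
    A result of 1.0 means reviews are evenly distributed; 20.0 means the
    top reviewer handles 20x their fair share.
    NOTE: this is an assumed definition; Teambit's exact formula may differ.
    """
    counts = Counter(review_log)
    fair_share = len(review_log) / len(counts)  # even split across reviewers
    return max(counts.values()) / fair_share

# Toy example: one reviewer doing almost everything
log = ["ana"] * 18 + ["ben", "carol"]
print(adjusted_review_concentration(log))  # 2.7 for this small team
```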

Does This Affect Review Effectiveness?

To measure review effectiveness, we focused on the percentage of reviewed PRs that had no impact on the code—meaning reviews that didn’t result in any significant changes or improvements.

We found that this percentage ranged from 48% to 94%, meaning that in some teams, the vast majority of reviews weren’t driving meaningful changes.
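
As a rough illustration, here is how one might compute that "no-impact" percentage from PR records, assuming each record notes whether any changes followed the review (the field names below are hypothetical, not Teambit's schema):

```python
def no_impact_review_pct(prs: list[dict]) -> float:
    """Percentage of reviewed PRs whose review produced no changes.

    Each PR dict is assumed to carry two illustrative fields:
      - "reviewed": the PR received at least one review
      - "changed_after_review": commits or requested changes
        followed the review
    """
    reviewed = [pr for pr in prs if pr["reviewed"]]
    no_impact = [pr for pr in reviewed if not pr["changed_after_review"]]
    return 100 * len(no_impact) / len(reviewed)

prs = [
    {"reviewed": True, "changed_after_review": False},
    {"reviewed": True, "changed_after_review": True},
    {"reviewed": True, "changed_after_review": False},
    {"reviewed": False, "changed_after_review": False},
]
print(no_impact_review_pct(prs))  # ~66.7: two of three reviewed PRs unchanged
```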

Is There a Correlation Between Concentration and Effectiveness?

To determine if there was a relationship between review concentration and effectiveness, we used Pearson’s correlation coefficient.

The result was a coefficient of around 0.33, indicating a moderate positive correlation. In other words:

Teams where code reviews are highly concentrated among a few people are more likely to have a high percentage of reviews with no impact.

It’s not an absolute rule (there are exceptions), but the overall trend is clear.
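
For the curious, Pearson's r is straightforward to reproduce with standard tooling. A minimal sketch with one data point per team (the values below are invented for illustration; the real dataset isn't public):

```python
from scipy.stats import pearsonr

# One data point per team: (adjusted concentration, % of no-impact reviews).
# These numbers are made up purely to demonstrate the computation.
concentration = [6.4, 12.0, 25.0, 40.0, 93.0]
no_impact_pct = [48.0, 61.0, 55.0, 80.0, 94.0]

r, p_value = pearsonr(concentration, no_impact_pct)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```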

Why Does This Happen?

Several factors could explain this:

Reviewer fatigue: If one person is handling most of the reviews, they’re likely to do them quickly and superficially, simply due to lack of time.

Lack of diverse perspectives: A single reviewer brings only one point of view. When more reviewers participate, there’s a higher chance of spotting issues and offering valuable insights.

“Rubber-stamp” reviews: Sometimes, the primary reviewer might approve PRs without suggesting changes just to avoid becoming a bottleneck.

This often happens when only the team lead is responsible for approvals, turning the process into a bureaucratic formality rather than a meaningful review.

How to Improve Review Distribution

If code reviews in your team are overly concentrated in one or two people, it may be time to rethink your strategy. Here are some ideas:

Encourage distributed reviews: Make sure everyone in the team participates in code reviews regularly.

Automate the obvious: Use linters and automated tests to reduce review workload for minor issues.

Define clear review standards: Document what is expected in a code review to prevent shallow reviews.

Rotate responsibilities: Ensure different team members take part in critical reviews (a simple rotation scheme is sketched below).
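
As one concrete way to rotate, a lightweight round-robin assigner could be wired into your PR tooling. A minimal sketch, assuming a least-recently-used policy (this is our suggestion, not a Teambit feature):

```python
from collections import deque

class ReviewerRotation:
    """Round-robin reviewer assignment so review load spreads evenly.

    Picks the least-recently-assigned reviewer, skipping the PR author.
    """
    def __init__(self, team: list[str]):
        self.queue = deque(team)

    def assign(self, author: str) -> str:
        # Cycle through the queue until we find someone who isn't the author.
        for _ in range(len(self.queue)):
            reviewer = self.queue.popleft()
            self.queue.append(reviewer)  # move to the back of the line
            if reviewer != author:
                return reviewer
        raise ValueError("team needs at least two members")

rotation = ReviewerRotation(["ana", "ben", "carol"])
print(rotation.assign(author="ana"))    # ben
print(rotation.assign(author="ben"))    # carol
print(rotation.assign(author="carol"))  # ana
```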

Final Thoughts

This study helped us validate something we suspected: code reviews are not just about quantity, but about quality and fair distribution. If one developer is shouldering most of the review workload, the process might not be as effective as it seems.

At Teambit, we offer a module dedicated to monitoring reviewer distribution and review impact. If you’d like to learn more, feel free to reach out!
