Why does a code review have to be blocking?
Although immensely popular in open source projects, do blocking code reviews via Pull Requests really help in closed source projects? This article explores the downsides of blocking code reviews and reasons about their alternatives.
Code reviews have long been, and will always be, an integral part of the software development lifecycle. They, along with Pull Requests, were further popularised by the open source community, where contributors raise a pull request for a change-set they want to introduce, and the owners of the repository review it and either accept it, request changes, or reject it. Naturally, the change-set is kept from being merged until the entire process is complete, so it is blocked from being shipped.
Pull requests and code reviews have been aggressively adopted in “closed” source projects too. But does the blocking nature of code reviews via PRs really help there? I’ve long proposed PRs and code reviews in teams, more so in newer teams. Yet as time progressed, and as I read more, listened to more talks, and learned from others’ experiences, I started realising that traditional code reviews have more demerits than merits. Let’s try to reason along.
Note: Unless explicitly stated, I’ll be referring to blocking code reviews in the context of code reviews done via a Pull Request (or PR, as referenced in the rest of the article), where the code waits (hence, is blocked) to be merged until a code review has been completed to satisfaction.
What are the benefits of good code reviews?
Before we jump into analysing the pitfalls of the blocking style of code review, let’s agree on why code reviews (whether blocking or not) are important in software engineering. Broadly speaking, the benefits of a good code review are as follows:
Ensures designs and implementations are consistent across the project.
Helps catch potential vulnerabilities and logic errors in the code.
Fosters knowledge sharing through the discussions between contributors and reviewers.
Helps validate the developed feature against the intended scope and desired outcome.
So, what’s wrong with blocking code reviews?
1. Increases the Lead Time
Most of the assumed and agreed-upon benefits of a code review come at the cost of blocking the code from getting to trunk (or the master/main branch) and, eventually, blocking the developers from picking up their next story. This blocking increases the Lead Time and lengthens the feedback loop.
Not to mention the plethora of merge conflicts the change-set continues to accumulate the longer it “sits in the review lane”.
The biggest impact on the lead time, though, comes from the context switching and the overhead of asynchronous communication (for the submitter and the reviewer alike). Dragan Stepanović explains it very beautifully in one of his slides.
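To make that cost concrete, here is a minimal back-of-the-envelope sketch. Every number in it is a hypothetical assumption, purely for illustration; the point is how quickly waiting and context switching compound once a PR needs more than one review round.

```python
# Back-of-the-envelope model of how async review round trips inflate lead time.
# Every number below is a hypothetical assumption, not a measurement.

coding_hours = 4.0             # time to write the change itself
wait_for_reviewer_hours = 3.0  # time the PR sits before someone picks it up
review_hours = 0.5             # time the reviewer actually spends reading the diff
rework_hours = 1.0             # time the author spends addressing comments
context_switch_hours = 0.5     # cost of reloading context, paid by both sides per round
review_rounds = 2              # "request changes" -> fix -> re-review

# Each round costs: waiting + the review itself + rework + two context switches.
per_round = (wait_for_reviewer_hours + review_hours + rework_hours
             + 2 * context_switch_hours)
lead_time = coding_hours + review_rounds * per_round

print(f"{coding_hours:.1f}h of coding becomes {lead_time:.1f}h of elapsed lead time")
# With these assumptions, 4 hours of coding becomes 15 hours of elapsed lead time,
# and most of the extra time is waiting and context switching, not reviewing.
```

The exact numbers will differ for every team; what rarely differs is that the waiting and the context switching, not the review itself, dominate the total.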
2. Blocking code reviews are not Continuously Integrated.
Just because you’re using a top-notch “CI/CD” tool doesn’t guarantee that your team is doing either Continuous Integration or Continuous Delivery.


Pull Requests used in conjunction with feature branching imply anything but Continuous Integration. Although a few ardent advocates of TBD (Trunk-Based Development) and CD argue that code kept “long enough” on a local master, instead of being pushed to remote “frequently” (at least once a day), is effectively still a local branch, I’m only referring to long-lived branches and explicit feature branching here.
“Feature branches are by design intended to hide changes. Continuous Integration is designed to expose changes. The two are mutually exclusive.” — Dave Farley (@DaveFarley)
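A crude way to check how far a team is from “integrating at least once a day” is to look at how stale its unmerged remote branches are. The sketch below shells out to plain git from inside a clone; the origin/main mainline name and the one-day threshold are assumptions you would adapt to your own setup, and branch-tip staleness is only a rough proxy for a long-lived branch.

```python
import subprocess
from datetime import datetime, timedelta, timezone

MAINLINE = "origin/main"     # assumption: adjust to origin/master or your trunk branch
MAX_AGE = timedelta(days=1)  # "integrate at least once a day"

# List remote branches not yet merged into the mainline, with their last commit date.
out = subprocess.run(
    ["git", "for-each-ref",
     f"--no-merged={MAINLINE}",
     "--format=%(refname:short) %(committerdate:iso8601-strict)",
     "refs/remotes/origin"],
    capture_output=True, text=True, check=True,
).stdout

now = datetime.now(timezone.utc)
for line in out.strip().splitlines():
    branch, date = line.rsplit(" ", 1)
    if branch in (MAINLINE, "origin", "origin/HEAD"):
        continue
    age = now - datetime.fromisoformat(date)
    if age > MAX_AGE:
        print(f"{branch}: last commit {age.days}d ago — looks like a long-lived branch")
```

Run it occasionally (or from a scheduled job) and the list itself becomes a conversation starter about integration frequency.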
3. Prone to “Pseudo Rubber Stamping”
No one wants to stand in the way of a unit of work and block it from happening. The bigger or more critical the change-set, the more time it takes to review. The more time it takes to review, the more it blocks. In such cases we’re all the more eager to just rubber-stamp the PR with an “LGTM” (looks good to me) and let it move on.
It is not that larger PRs either have too much or too little discussion, but rather they have a larger variability in their amount of discussion than that seen in smaller PRs.
There is an interesting and detailed study by Jellyfish, available on their blog, which suggests that the amount of discussion (review comments and replies) varies greatly as the size of the PR increases. That variability suggests that not all PRs are getting the attention they require; as a result, not all PRs are being reviewed the way we would like in the first place. This defeats the very purpose of a code review.
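The underlying metric is easy to compute for your own repositories. Here is a hedged sketch with invented PR data (the numbers are made up for illustration, not taken from the Jellyfish study) showing how the spread of review discussion can grow with PR size even when the averages stay similar.

```python
from statistics import mean, pstdev

# Hypothetical PRs as (lines changed, review comments). Invented numbers, purely
# illustrative — not data from the Jellyfish study.
prs = [
    (40, 6), (55, 4), (70, 7), (90, 5),          # small PRs
    (600, 1), (750, 12), (900, 0), (1200, 10),   # large PRs
]

small = [comments for lines, comments in prs if lines <= 200]
large = [comments for lines, comments in prs if lines > 200]

for label, comments in (("small", small), ("large", large)):
    print(f"{label} PRs: {mean(comments):.1f} comments on average, "
          f"spread (stdev) {pstdev(comments):.1f}")

# With this made-up data the averages are close (5.5 vs 5.8 comments), but the large PRs
# swing between no discussion at all and a pile of comments — it is the spread that grows
# with PR size, which is how some of them end up effectively rubber-stamped.
```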
It’s also implausible to expect individuals (as opposed to the team as a whole) to understand every aspect of the system by themselves. Expecting them to do all critical code reviews with due diligence is therefore a fallacy! This becomes more evident as both the codebase and the team grow in size.
Woods’ Theorem: As the complexity of a system increases, the accuracy of any single agent’s own model of that system decreases rapidly.
4. Dilutes responsibility and ownership
Teams that over-rely on a few seniors (maybe a tech lead or an architect) for critical reviews not only create bigger bottlenecks, but also keep those seniors busy enough that they can’t pick up other meaningful and important work themselves.
The moment code is sent for review, its ownership shifts from the original dev (who wrote it) to the reviewer. This can be a challenge. It gives devs a false signal that their work is “done”. In reality, it’s not done until it’s shipped and monitored briefly for sanity after deployment. Any other “done” is really just a facade!
5. Impedes effective knowledge sharing amongst the team
It has long been argued and advocated in the community that code reviews help share knowledge within the team. With code reviews, the team not only aligns itself on coding and design consistency, but also discovers and shares new and better techniques. Even so, I would argue that blocking, async-style code reviews are in general antithetical to knowledge sharing.
With async code reviews, mostly only the devs taking part in the review are aware of the conversations. It’s quite likely that other team members will repeat the same mistakes and patterns unless the entire team becomes aware of that knowledge.
Knowledge sharing at the time of an async code review is already too late in most cases! It’s reactive as opposed to proactive. If the knowledge had been shared upfront (and with the entire team), the time spent reviewing and verifying consistency and design could have been saved and put to better use.
Can we do things differently?
We all agree without a doubt that code reviews are important. We also reasoned through certain pitfalls of doing code reviews in a blocking, async fashion. Is that the end of the road, or can we do things differently to reduce the pitfalls and still achieve the same benefits that we set out to achieve in the first place?
1. Pair programming and continuous code review
With pair/mob programming, you get continuous and instant feedback on the code. The review happens for every line of code as it is written, in real time.

There is minimal context switching. The pair’s or mob’s single and only responsibility at any given time is the code they are working on. Doing a synchronous review as they code and reason along is a huge bonus!
There is no waiting for a review, making changes, and getting reviewed again (and in most cases that cycle repeats more than once, making it even longer). With pairing or mobbing, every participant is present all the time.


The Lead Time is immensely reduced.
2. Rotate your pairs
Effective pair rotation helps keep knowledge silos to a minimum.
A frequent change in the pair of eyes looking at the functionality in progress brings a fresh perspective and can uncover things that were previously hidden.
Makes software a shared and social responsibility.
3. Host “Show and Tell”
Organise frequent “Show and Tell” in your team.
Talk about the design vision more often.
Showcase the new techniques of implementation that you discovered.
Talk about the desired way of writing code.
Discuss the debt your code has accrued over time and how it can be dealt with.
Equally important, showcase the learnings from code failures and bugs.
So Pair Programming is the silver bullet, you say?
Well, it’s kind of a rhetorical question to ask. If every team and every org could work through pairing and mobbing, we would see it more often than we do now.
There are several cases where Pair Programming or Mobbing might not be effective:
Lack of proper infrastructure and tooling.
The team is more leveraged than it should be: the ratio of experience levels and tenure in the team is skewed.
There are multiple (sometimes even two are more than sufficient) geographically separated teams that depend on each other and are neither truly distributed nor autonomous.
Low to no psychological safety. In an environment with low psychological safety, pairing can be daunting; it can come across as being “monitored” or “watched over”.
Teams that do not understand pragmatic pairing and treat it like dogma!
Critical code reviews, such as those pertaining to governance and compliance (often related to security), where only a few people in the org/team have the complete knowledge required.
Pairing introduces fatigue if not done with agreed-upon session timings and breaks.
Closing Thoughts…
The aim of this article is not to propose adopting an absolutist approach. I understand that “it depends”. I understand that one methodology might not be applicable to different engineering teams. I also understand that the same methodology might not be applicable to the same team, all the time.
This article is rather an attempt to highlight the trade-offs a team inherits when it employs blocking code reviews. These trade-offs might not be justified for every type of code review; maybe there are a few that require more detailed attention than most. My advice would be to choose carefully. Don’t make it a dogma! Use your judgement wisely!