Peer Code Review | A Definition of "Ready for Review": Part II
- erinvh620
- Aug 4, 2024
- 10 min read
Updated: Nov 6, 2024
Intended Audience: software developers, software development team leads, software development organisation leadership
TL;DR Software development teams can save time and avoid frustration by establishing a shared definition of "ready for review."
Gotta Love Some Good Infrastructure
Years ago, I joined an iOS team that had automated much of its workflow. One of my favourite bits of automation was called “the PR runner” (PR stands for pull request). The PR runner was a job that our continuous integration (CI) tool ran when a new PR was created, an existing PR was updated, or when manually triggered. Once triggered, the PR runner built the project and ran a series of checks. The results of each check were then displayed with green or red indicators, showing which checks passed and which failed.
More Info: Pull request
A pull request, commonly called a PR, is a set of changes to a body of code. A developer creates a PR to propose (i.e. request) that those changes be "pulled" into, or applied to, the main branch of development. Pull requests facilitate collaboration by allowing other developers to review, discuss, and suggest improvements to the proposed changes before merging. A PR might also include text, tables, charts, images, or diagrams providing additional information about the code changes. Some PRs will include results for tests run on the code proposed in the PR.
GitHub and Bitbucket have popularised the "pull request" terminology. However, other source code repository management tools use different terminology, and some companies also have internal terminology:
- GitLab, a tool similar to GitHub and Bitbucket, uses the term "merge request."
- At Google, software developers use the term "changelist" or CL.
- At Meta, the term is "diff."
- At Apple, where they use Git internally, the term is "pull request."
- At Netflix and Amazon, the term is "change request."
All of these terms mean the same thing.
The PR runner should have executed our user interface (UI) test suite. However, this was not feasible, as running the complete suite of UI tests took two hours. We worked around this by allowing developers to specify a list of select UI tests for the PR runner to execute. Additionally, every night after regular working hours, the complete UI test suite was run on the latest version of our main development branch. Although not perfect, this approach was the best solution given the circumstances.
At one point, the team noticed an unusually high number of UI test failures overnight. After investigating, we found that new team members were not specifying UI tests in their PRs. As a result, the failures were detected overnight instead of during the PR review. The iOS team had been growing quickly, and there’d been a communication breakdown about the process.
New joiners make excellent chaos monkeys, and this situation showed us that we’d missed a trick with the PR runner: the PR runner should never have passed a PR with no UI tests specified. We had a brief team discussion and decided it was OK for developers to specify an empty list for the UI tests; that showed intention. The problem was when no list was specified; that signalled oversight. It took one of my co-workers about five minutes to update the PR runner with our new logic.
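The five-minute fix might have looked something like this sketch (the function and field names are invented for illustration; the real PR runner's internals are long gone):

```python
# Hypothetical sketch of the PR runner's new check.
# The rule from our discussion: an explicit empty list shows intention,
# while a missing list signals oversight.

def ui_tests_check(pr_config: dict) -> tuple[bool, str]:
    """Return (passed, message) for the 'UI tests specified' check."""
    if "ui_tests" not in pr_config:
        # No list at all: the author never considered UI tests. Fail.
        return False, "No UI test list specified; add one (it may be empty)."
    if not pr_config["ui_tests"]:
        # Explicit empty list: the author decided no UI tests apply. Pass.
        return True, "UI test list explicitly empty."
    return True, f"{len(pr_config['ui_tests'])} UI test(s) will run."

print(ui_tests_check({}))                                # missing list: fails
print(ui_tests_check({"ui_tests": []}))                  # empty list: passes
print(ui_tests_check({"ui_tests": ["LoginFlowTests"]}))  # named tests: passes
```

The interesting part isn't the code; it's that the team first had to agree on what "missing" versus "intentionally empty" meant before anyone could automate it.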
That discussion was about our team’s definition of “ready for review”. We didn’t think of it that way at the time. But we were saying, “If you have not considered which UI tests should be run to check for regressions, then this PR is not ready for review.” In hindsight, I realise that the PR runner was essentially an automated check for adherence to our team’s (non-explicit) definition of ready for review.
In a software development team, having a clear and shared understanding of what constitutes “ready for review” is crucial for ensuring quality, effectiveness, and efficiency. Well-defined “ready for review” criteria ensure that code reviews are productive and meaningful. Teams risk miscommunication, wasted effort, and subpar code quality without this shared understanding. Automation, like the PR runner, can play a pivotal role in enforcing these criteria, making the review process smoother and more reliable. This post will explore why a team-wide “ready for review” definition is essential and how it can transform your development process.

A Team Definition of Ready for Review
A Definition of Ready for Review (DoRfR) usually includes adherence to coding standards, team standards, and team processes. Here are some ideas for what a team might include in their definition:
Code completeness: All code is fully implemented, and any incomplete or placeholder code is removed. All merge conflicts have been resolved.
Testing: Unit tests, UI tests, integration tests, and other relevant tests are written and passing.
Documentation: Inline documentation is present, and any necessary external documentation is updated.
Code Style and Standards: The code follows the team’s coding standards and style guidelines. (Note that coding standards and style guidelines should be defined separately from your definition of ready for review.)
Self-review: The author has thoroughly reviewed their own code to catch obvious issues early.
Dependencies: All dependencies are up-to-date and properly managed, ensuring the code can be integrated smoothly.
PR Standards: The PR has a description which includes a link to the issue. PRs that involve UI changes include annotated before-and-after screenshots. PRs that add or change network calls have a log of the network traffic attached.
These elements can be adapted or expanded based on the team’s specific needs and preferences. This is by no means an exhaustive list.
Tailored to the team’s needs
A team’s DoRfR should be tailored to its specific needs. Your team might prefer a concise and to-the-point definition or a comprehensive and detailed definition. The definition can be formatted as a paragraph, a bulleted list, or a checklist. The optimal approach will usually depend on several factors, such as the size of your team, how long the team has been working together, and the varying experience levels of team members.
A team of developers with 10+ years of experience who have worked together for several years can get away with a concise definition because they have lots of shared context. In a situation with no shared context among the development team, like an open source project, a team with high turnover, or lots of junior developers, the DoRfR must be more detailed and comprehensive.
When determining a process, adopting the lightest process that effectively meets your goals is advantageous. In the same spirit, you’ll want to go with the simplest definition that still achieves the goal: a clear team standard for what is considered “ready for review.”
A collaborative and inclusive process
It’s critical that a team’s DoRfR is not dictated from above but is collaboratively developed by the entire team. This inclusive approach ensures that all team members have a sense of ownership and buy-in, which leads to greater adherence to the definition. Encouraging input from everyone, regardless of their role or experience level, fosters a more comprehensive and practical definition. Regularly revisiting and refining the DoRfR based on team feedback can help keep it relevant and effective as the team evolves.
Make it visible
Your team can have the best DoRfR in the world, but it’s useless if team members aren't familiar with it or can't easily access it. Here are some ideas to ensure your team’s DoRfR is always accessible:
Post in the Office: Display the definition on a wall near the team’s workspace.
Slack Channel: Include the DoRfR in the description of a relevant Slack channel.
Codebase README: Add the definition to your codebase’s README file.
Process Documentation: Incorporate the DoRfR into the team’s documented processes.
Creative Placement: Post the definition on the back of bathroom stall doors.
Home Office: Create a PDF for team members to print and display in their home offices.
PR Template: Turn the definition into a checklist within a PR template.
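As an illustration of that last idea, here is what a DoRfR checklist might look like as a GitHub pull request template (a file named `.github/pull_request_template.md` is picked up automatically; the items below are examples drawn from the list earlier in this post, not a recommended definition):

```markdown
## Ready for Review Checklist

- [ ] All code is fully implemented; no placeholder code remains
- [ ] All merge conflicts are resolved
- [ ] Unit, UI, and integration tests are written and passing
- [ ] Inline and external documentation is updated
- [ ] The code follows the team's style guidelines
- [ ] I have self-reviewed this PR
- [ ] The description links to the issue
- [ ] UI changes include annotated before-and-after screenshots
```

GitHub renders each `- [ ]` item as a checkbox that the PR author can tick off before requesting review.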

Benefits
Having a shared Definition of Ready for Review offers numerous benefits.
Produce high-quality output
A team DoRfR will increase the quality of PRs submitted for review. The quality of a process’s output (such as peer code review) is limited by the quality of the input. We want to submit PRs of the highest quality for review. Only then can we get the highest quality out of the review process.
Karl Wiegers said it best in the first chapter of his 2002 book Peer Reviews in Software: “In a multistep process such as software development, the quality of what comes out of any process step is limited by the quality of the inputs to that step.”
Save valuable developer time
A shared DoRfR prevents wasted developer time. Clear expectations reduce unnecessary back-and-forth communication during peer review, minimising frustration for both the PR author and the reviewer.
Clarify expectations
With well-defined criteria, all developers know precisely what is expected, leading to more consistent and predictable peer code reviews. This eliminates arbitrary reviewer feedback based on personal preferences.
Additionally, clear expectations are invaluable for new joiners and those from diverse cultures or backgrounds, providing them with a concrete understanding of what is required. A shared definition fosters a more efficient, inclusive, and effective review process.
Avoid unnecessary friction
Consider Bob, a software developer with years of experience and a great attitude towards peer code review. One day, he submits a PR for review. A few hours later, his colleague Sarah returns the PR with feedback. The first comment points out an extra newline between methods in Bob’s code: “extra newline.” Bob feels annoyed—he worked hard, and all Sarah’s focusing on is whitespace. Sarah is also frustrated—why didn’t Bob take a moment to fix the spacing?
An agreed-upon DoRfR can prevent these kinds of unnecessary disagreements. Whitespace is a common source of friction among developers. Debating this once as a team and including the decision in the DoRfR eliminates future debates and saves time.
When Sarah reviews a PR that doesn’t adhere to the standard, she can say, “The team standard is one newline between methods” and refer the PR author to the DoRfR. This approach makes feedback feel less personal and more objective, reducing friction between team members.
Automation
Once your team has established a stable DoRfR, the real fun begins. Instead of manually checking whether your PR meets the DoRfR, you can automate the process!
Return to Bob and Sarah: a linter would catch the extra newline in Bob’s code before it causes friction between them. An automated formatter could even fix it for him. Imagine peer code reviews where reviewers don’t need to check formatting, and authors never get feedback on it. That’s the world I want to live in!
Automation can eliminate a lot of the friction involved in peer code review. The computer tells the PR author what they’ve missed and when their PR is ready for review. A machine enforces the DoRfR, not a person.
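As a sketch of what such automation might look like (the checks, field names, and thresholds below are invented for illustration, not a prescription), a CI job could run a script like this against a PR's metadata and fail the PR before a human reviewer is ever assigned:

```python
import re

# Hypothetical DoRfR checks run by CI against a PR's metadata.
# Field names ("description", "ui_tests", ...) are invented for illustration.

def check_dorfr(pr: dict) -> list[str]:
    """Return a list of DoRfR violations; an empty list means ready for review."""
    violations = []
    description = pr.get("description", "")
    # PR Standards: the description must link to an issue (e.g. "#42").
    if not re.search(r"#\d+|issues/\d+", description):
        violations.append("Description must link to an issue.")
    # Testing: a UI test list must be present, even if explicitly empty.
    if "ui_tests" not in pr:
        violations.append("Specify a UI test list (it may be empty).")
    # PR Standards: UI changes need annotated before-and-after screenshots.
    if pr.get("has_ui_changes") and not pr.get("screenshots"):
        violations.append("UI changes need before-and-after screenshots.")
    return violations

pr = {"description": "Fix crash on login", "ui_tests": []}
for v in check_dorfr(pr):
    print(f"NOT READY: {v}")
```

The script itself is trivial; the value is that every check encodes a decision the team already made once, so nobody has to re-litigate it in review comments.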
What about reviewing "early and often"?
You have probably heard the saying that feedback should be solicited “early and often”. But how do we reconcile that with a strict Definition of Ready for Review? I addressed this in my previous post.

Personal vs. Team
If you read my previous post, you’ll know my opinions on having a personal DoRfR. I believe all software developers should have one. It’s a great way to demonstrate professionalism and to continuously improve. Now that I’m discussing a team definition, you might wonder how the two fit together.
The main difference lies in who the definition is tailored to. A team definition is customised for the team and its processes, while a personal definition is tailored to your specific needs and personal workflows.
Key Differences
1. Audience and Customisation
Personal DoRfR: Tailored to your own needs and processes, focusing on your professional growth and personal standards.
Team DoRfR: Designed to align with the team’s workflows, ensuring consistency and collaboration across all members.
2. Impact and Implementation
Personal DoRfR: Helps you maintain a high standard of work and present yourself professionally.
Team DoRfR: Facilitates automated checking of checklist items and integration into PR templates, enhancing the overall efficiency and consistency of the team’s workflow.
Advantages of a Team Definition
Having a shared team definition opens up numerous opportunities. It allows for automated checking of items on the list and adding a checklist to a PR template. This automation streamlines the review process, making it more efficient and less prone to oversight.
Additionally, a team definition makes the peer code review process measurable. It provides a definitive entry point for reviews. If your team also defines exit criteria for the peer code review process, you can measure the time a PR spends in review, giving you valuable data to analyse your process.
With this data, you can run experiments to optimise your workflow. For instance, you can test if requiring two reviewers speeds up or slows down the process, or you can analyse how lines of code (LOC) relate to review time. This data-driven approach can lead to continuous improvement in your team’s performance.
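As a toy illustration of that measurement (the timestamps are invented), once your team has defined entry and exit points for review, computing time-in-review is straightforward:

```python
from datetime import datetime
from statistics import mean

# Toy data: (ready-for-review time, review-complete time) per PR.
# Timestamps are invented for illustration.
reviews = [
    (datetime(2024, 8, 1, 9, 0), datetime(2024, 8, 1, 15, 0)),
    (datetime(2024, 8, 2, 10, 0), datetime(2024, 8, 3, 10, 0)),
    (datetime(2024, 8, 5, 14, 0), datetime(2024, 8, 5, 16, 0)),
]

hours_in_review = [(done - ready).total_seconds() / 3600
                   for ready, done in reviews]
print(f"Mean time in review: {mean(hours_in_review):.1f} hours")
# → Mean time in review: 10.7 hours
```

With real data, you would pull these timestamps from your repository host's API and segment them by reviewer count, PR size, and so on to run the experiments described above.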
Complementary Use of Personal and Team Definitions
Ideally, you should use both a personal and a team definition. A personal DoRfR ensures you present yourself professionally and continuously improve. A team DoRfR ensures the team functions smoothly and performs efficiently. The two complement each other, creating a comprehensive approach to maintaining high standards in code reviews.
The only exception is if the team definition already encompasses everything you would include in your personal definition. In such cases, the team definition alone might suffice. However, maintaining a personal checklist can still be helpful for professional development.
Action Items
Developer Action Items
Propose a Definition: If your team lacks a definition of "ready for review", suggest developing one. To initiate the conversation, identify recurring issues that could be resolved with this definition.
Use Existing Definitions: Familiarise yourself with existing definitions or PR checklists. Ensure you use them and suggest improvements, such as better accessibility or creating a PR template.
Create a Personal Definition: If your team isn’t interested in a shared definition, write one for yourself to maintain high standards. This will demonstrate your professionalism and be helpful if the team later adopts a shared definition.
Team Lead Action Items
Establish a Definition: Discuss the benefits of a shared definition with your team. If agreed upon, set aside a short meeting (about 30 minutes) to create one.
Publish and Enforce: Ensure the definition is easily accessible to all team members and foster a culture of consistent use. Update the definition as needed.
Automate Checks: Empower your team to automate PR checks against the definition. Support team members interested in this, helping them develop necessary skills and recognise their efforts.
Technical Organisation Leadership Action Items
Promote Shared Definitions: Consider how shared definitions can solve organisational issues. Suggest this approach when team leads or developers face problems.
Avoid Mandates: Do not mandate definitions for all teams. Allow teams to decide if they need one and avoid imposing a generic definition from above.
Next month, I'm deciding between diving deeper into automating a team DoRfR or moving on to another key aspect of peer code review: setting up your reviewer for success. I'd love to hear what you're most interested in! Leave a comment below to let me know which topic you'd like to see covered. Your feedback will help shape the next post!