Code reviews can be a very effective way of improving code quality and reducing overall effort. However, they can also be fraught, with participants becoming caught up in matters of personal style rather than objective issues. This is a checklist to help code reviewers work in an objective and fair way.
Code Reviews aim to:
- Trap bugs early
- Decrease testing time
- Improve error detection
- Ensure code conforms to standards
- Ensure code (and comments) are suitable for future development/maintenance.
Secondary benefits can include:
- Knowing that the review is coming helps improve programmer discipline
- Explaining to someone else double-checks assumptions and design
- Spreading expertise in particular components
- Spreading programming skills.
- As everyone knows: other people spot problems more quickly than the author does.
The Review meeting
Attendees might include the following roles:
- Code Reviewer
- Moderator (chairman, quality process manager – sets up meeting, ensures it’s followed up)
- User Reviewer (code user, not end user)
- Maintainer Reviewer (who will inherit the system)
Project managers should not (normally) be involved. The review should not be a performance review (this makes it harder for staff to present openly), and should be limited to technical staff.
The author’s material is distributed to the reviewer, user and maintainer before the meeting. The material should be complete; with unfinished material it is difficult to establish what is genuinely wrong and what is simply not yet done.
The meeting itself should be an hour maximum. The author might present or the reviewer(s) might walk through. Once an error is detected it should be noted, including some idea of the consequences of not making the change, and the severity of the change required, such as:
- Fundamental architectural change(s) required across the system
- Architectural change required to the component being reviewed
- Major fault restricted to component but requiring considerable rework
- Minor fault requiring less than 10% rework
- Cosmetic change – quick correction probably not requiring review.
Solutions should normally be left to the developer: discussing them in the meeting can rapidly turn into extended arguments that are of little interest to most of the attendees. Moderators will need to enforce this, and to curtail arguments about style. Either an error exists or it does not, and the meeting must be limited to detecting errors.
After the meeting the moderator should summarise it (attendees, material covered, etc), list the errors found, and record whether the corrections will themselves need to be reviewed.
The checklist
- Does the code fulfil all the design requirements? Where it does not, is this clearly marked?
- Is it usable? (What’s required to run the software)
- Are document statements unambiguous?
- Have the specified interfaces been implemented?
- Is the logic correct? Is the order of events and decisions correct?
- Do loops increment/terminate correctly for all inputs?
- Is all the code reachable?
- Is all pointer manipulation correct?
- Are strange code constructions justified?
- Are indices to arrays correct?
- Are data overlays correctly mapped?
- Are entry and exit conditions correct?
- Has rounding and truncation been accounted for?
- Are race conditions avoided and locks properly implemented in concurrent/multi-tasking code?
- Are out-of-range/illegal arguments dealt with correctly?
- Does the code conform to the CodingStandards? If it does not, does it say why not?
- Does the code do what it is expected to do (judging from the method name, class, comments, etc)?
- Is the code clear? NB this can be a style issue. Stick to obvious nasties and obvious misleading constructs.
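As a concrete illustration of the logic, argument and rounding checks above, consider the following minimal Java sketch. The class and method names are invented for this example; the comments mark the kinds of defects (and defences) a reviewer working through the checklist would look for.

```java
// Hypothetical example for review: comments flag checklist items.
public class AverageCalculator {

    /** Returns the mean of the values. */
    public static double mean(int[] values) {
        // Out-of-range/illegal arguments dealt with explicitly:
        // a null or empty array would otherwise crash or divide by zero.
        if (values == null || values.length == 0) {
            throw new IllegalArgumentException("values must be non-empty");
        }
        long sum = 0;  // long, not int, to guard against overflow
        // Loop terminates correctly for all inputs;
        // "i <= values.length" here would be an off-by-one index error.
        for (int i = 0; i < values.length; i++) {
            sum += values[i];
        }
        // Rounding/truncation accounted for: cast before dividing,
        // otherwise integer division silently truncates the result.
        return (double) sum / values.length;
    }

    public static void main(String[] args) {
        System.out.println(mean(new int[] {1, 2}));  // 1.5, not 1.0
    }
}
```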
These apply to both javadoc (external) and implementation (internal) comments:
- Do the comments match the code?
- Are the comments meaningful and concise?
- Do implementation comments say ‘why’, not just ‘what’?
- Are interface stubs and functions identified?
- Are outputs detailed for all inputs?
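The comment checks above can be illustrated with a short Java sketch. The method, its constants and its backoff policy are all invented for this example; the point is that the javadoc details outputs for all inputs, and the implementation comment explains *why*, not just *what*.

```java
// Illustrative only: names and policy are hypothetical.
public class RetryPolicy {

    /**
     * Returns the delay before the given retry attempt.
     *
     * @param attempt the retry attempt number, starting at 1
     * @return the delay in milliseconds; never negative
     * @throws IllegalArgumentException if attempt is less than 1
     */
    public static long delayMillis(int attempt) {
        if (attempt < 1) {
            throw new IllegalArgumentException("attempt must be >= 1");
        }
        // Why, not what: cap the exponential backoff at 30s so that a
        // long outage does not push clients into hour-long waits.
        return Math.min(30_000L, 100L * (1L << Math.min(attempt, 20)));
    }

    public static void main(String[] args) {
        System.out.println(delayMillis(1));   // 200
        System.out.println(delayMillis(30));  // capped at 30000
    }
}
```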
On errors and exceptions
- Does the code trap/deal with special circumstances? (eg, null or zero or otherwise out of range parameters, empty collections, etc)
- Will failure leave the instance/class in a properly defined state?
- Are failures in called methods handled appropriately in calling code?
- For test cases (if present), do the test cases exercise the error handling code?
- Are test harnesses defined or identified?
- Has the component been suitably exercised (including extreme inputs, midpoints, possible single inputs, and null inputs)?
- Does the component conform to size and performance limits?
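A minimal Java sketch of what "exercising the error handling code" can look like, using a hand-rolled check rather than any particular test framework. The parser and its range are invented for this example; note that the tests cover the extremes of the legal range as well as the illegal and null inputs.

```java
// Hypothetical component under test plus a simple test harness.
public class ParserTest {

    /** Parses a percentage in the range 0-100 from a string. */
    static int parsePercent(String s) {
        if (s == null) {
            throw new IllegalArgumentException("input must not be null");
        }
        int v = Integer.parseInt(s.trim());
        if (v < 0 || v > 100) {
            throw new IllegalArgumentException("out of range: " + v);
        }
        return v;
    }

    // Returns true if the error path fires for this input.
    static boolean rejects(String s) {
        try { parsePercent(s); return false; }
        catch (IllegalArgumentException e) { return true; }
    }

    static void check(boolean ok, String what) {
        if (!ok) throw new AssertionError(what);
    }

    public static void main(String[] args) {
        // Happy path, including the extremes of the legal range.
        check(parsePercent("0") == 0, "lower bound");
        check(parsePercent("100") == 100, "upper bound");
        check(parsePercent(" 50 ") == 50, "whitespace trimmed");

        // Error handling code exercised explicitly.
        check(rejects("101"), "out-of-range input must be rejected");
        check(rejects(null), "null input must be rejected");

        System.out.println("all checks passed");
    }
}
```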
While this page is aimed at code reviews, similar practices are suitable for document reviews (functional requirements, user manuals, etc).