Today’s post is – obviously – about code reviews. I don’t know how familiar you are with the topic – maybe you don’t use any reviews at all, maybe you use them in a better way than I do – so I will describe my experience first.
From what I’ve seen so far in Dynamics AX implementations, the usual process is that code is developed, tested by a functional consultant and sent to the customer (this applies to ISVs too). Nobody ever looks into the code to see whether it was programmed correctly, so it’s not surprising that most implementations have serious problems with internal quality, e.g. with maintainability. These things are simply not tested at all.
Introducing code reviews can make a big difference in overall quality, so I promote them a lot, and I want to believe that all my past teams appreciated the results.
However, it turns out that it’s not always clear to everybody what code reviews are and why they’re important.
Purpose of code reviews
You may expect the purpose of reviews to be obvious, but reality shows otherwise.
I see several important reasons for reviews:
- Testing internal quality – maintainability, testability etc.
- Functional testing
- Knowledge sharing
I think all these things are really important. But some teams have degraded code reviews to mere checking of naming conventions and code alignment; even their way of doing code reviews doesn’t allow anything much better (for example: if you look at a few lines of code on a projection screen, how can you tell whether that code should have been written at all?). They do “code reviews”, but they get almost nothing from them. What a pity.
Let’s take a look at the purpose in detail.
Internal quality
Testing internal quality (“code quality”) is typically the main reason why development teams begin to do code reviews. And rightly so – because this type of testing is often completely missing, filling the gap is crucial.
Without being exhaustive, I can name a few important things to test:
- Overall approach – does the code do the right thing by proper means? (If not, we may end up with a nice implementation of a really bad idea.)
- Maintainability/extensibility – are we able to easily extend the solution, fix bugs etc.? (This includes many fundamental programming rules, e.g. DRY or SRP, OO design, DB design etc.)
- Testability – are we able to test the code?
- Documentation – are we able to understand why the code was made and what it does?
- Performance – is the design correct from a performance perspective? (In AX this means especially client/server optimization; a small sketch of a related design issue follows after this list. Note that this is not performance testing.)
Most of these things can’t be tested by the usual black-box testing – or only once they have resulted in a huge problem (e.g. performance), and that’s usually too late. If this type of testing is missing, it’s easy to run into problems with maintainability, testability, performance… (I’m sure this sounds familiar to many AX teams.)
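To make the performance point a bit more concrete, here is an illustrative sketch (not taken from any real implementation; table values and method names are made up) of the kind of design a reviewer can flag long before it turns into a measurable problem – a record-by-record update where a single set-based statement would do:

```
// Record-by-record update: one database round trip per customer.
void moveCustomersToNewGroup()
{
    CustTable custTable;

    ttsBegin;

    while select forUpdate custTable
        where custTable.CustGroup == 'OLD'
    {
        custTable.CustGroup = 'NEW';
        custTable.update();
    }

    ttsCommit;
}

// What a reviewer would typically suggest instead: a single set-based statement
// executed on the database tier (assuming update() is not overridden on the table
// in a way that forces record-by-record processing).
void moveCustomersToNewGroupSetBased()
{
    CustTable custTable;

    ttsBegin;

    update_recordset custTable
        setting CustGroup = 'NEW'
        where custTable.CustGroup == 'OLD';

    ttsCommit;
}
```

Black-box testing with a handful of demo records would never reveal the difference; a reviewer sees it at a glance.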
Functional testing
It’s not uncommon to hear that code reviews shouldn’t test whether the code does what it’s supposed to do. This strange statement has its roots (again) in the idea that reviews should check just things like indentation.
It’s important to realize that a code review is a type of testing. It’s white-box testing, therefore we see more than black-box testers do, and we must use this ability to find problems in the product.
Some bugs are much more visible when looking into the code – for example, you can immediately see that some potential error is not handled. Functional testers may not even be aware of such a possibility (e.g. they don’t know that some values are read from a potentially missing file).
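As a small, hypothetical illustration of that last point – the method below reads values from a file, and only a look into the code reveals that a missing file is never handled (the import routine and its processLine helper are invented for the example):

```
void importRatesFromFile(FilePath _filePath)
{
    TextIo file;

    // A reviewer spots immediately that nothing handles a missing file or a
    // failed open – the user would get a generic run-time error instead of a
    // meaningful message.
    file = new TextIo(_filePath, 'R');

    while (file.status() == IO_Status::Ok)
    {
        this.processLine(file.read());
    }
}
```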
Testing theory often talks about testing boundary conditions, analysis of paths through code and similar things. This obviously can’t be done through black-box testing. Also, when you start thinking about all combinations of paths through several methods, you quickly realize that you can’t really test every possible path from the UI. (This is where unit testing proves indispensable.)
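A minimal sketch of what such a test can look like in the SysTest framework that ships with AX – the class under test (MyDiscountCalculator) and its method are invented for the example:

```
// A boundary condition pinned down in a unit test; reaching the same condition
// through the UI would be far more work and far less repeatable.
class MyDiscountCalculatorTest extends SysTestCase
{
    void testZeroQuantityGivesNoDiscount()
    {
        MyDiscountCalculator calculator = new MyDiscountCalculator();

        this.assertEquals(0.0, calculator.calcDiscount(0));
    }
}
```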
Furthermore, some conditions (e.g. race conditions or a state of external systems) are virtually impossible to simulate by functional testers, but may be easily prepared by a piece of code.
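For instance, a simple test double can simulate an unavailable external service in one line of code – something a functional tester can hardly arrange on demand. The interface and class names below are again just illustrative:

```
// Hypothetical abstraction over an external exchange-rate service.
interface MyRateService
{
    real currentRate(CurrencyCode _currencyCode);
}

// Test double that simulates the outage – trivially easy in code,
// nearly impossible to schedule against a real external system.
class MyUnavailableRateService implements MyRateService
{
    real currentRate(CurrencyCode _currencyCode)
    {
        throw error("Rate service is not available");
    }
}
```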
You can see that a significant part of testing needs to be done as white-box (or gray-box) testing, and the usual approach in the world of Dynamics AX is not suitable for that. Testing is mostly done by business consultants who often know little about software testing. But the aspects described above need to be tested by a different type of tester anyway – somebody who can see into the code, is able to identify paths through it, simulate required conditions etc. Microsoft has SDETs (Software Development Engineers in Test) for this purpose; we usually have to leave it to experienced developers reviewing code.
Knowledge sharing
Knowledge sharing is not usually the primary motivation for doing code reviews, but that doesn’t mean it’s not important. It comes in several different forms:
- The code reviewer(s) get familiar with the new code, so the knowledge is not limited to the author.
- Discussions between the developer and the reviewer allow sharing knowledge about the problem and possible solutions.
- Recurrent errors show where additional training is needed.
- Developers come to understand how important it is to write readable code.
I would like to emphasize here how useful it is to review code written by junior developers and to spend time explaining to them why and how to solve certain problems. These discussions over real code, comparing their solution with a better approach, have proved very helpful to them.
Should all code be reviewed?
This question is discussed quite often, but I don’t think it’s a difficult one. Of course you don’t have to test the code you deliver to your customers – just don’t be surprised by the consequences.
It’s extremely risky to deliver untested or poorly tested code and wait until data gets corrupted in the customer’s system or some other ugly incident happens. Therefore all code should be reviewed (= statically white-box tested). This includes few-line fixes (they’re actually very error-prone) and code fixing issues found in previous reviews.
Tools
It’s always good to automate activities that don’t have to be done manually. Static analysis tools are able to find many potential issues automatically, saving time and ensuring that nothing is simply overlooked. In Dynamics AX such analysis is done by the Best Practice checks, and you can easily add new checks covering your specific needs.
That’s my usual answer to people who believe that the main purpose of code review is to verify that all objects use the company prefix – this can be easily implemented as a BP check, so there is no reason to waste more time on it than it takes to run the check. Reviewers should concentrate on things that can’t be automated.
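To illustrate, the rule itself is a one-liner; the prefix and method name below are made up, and wiring such a rule into the Best Practice framework is a topic of its own (see the note at the end of this post):

```
// Hypothetical prefix rule – 'ABC' stands for whatever prefix your team uses.
boolean nameHasCompanyPrefix(TreeNode _treeNode)
{
    #define.CompanyPrefix('ABC')

    return subStr(_treeNode.AOTname(), 1, strLen(#CompanyPrefix)) == #CompanyPrefix;
}
```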
Timing
Although I mentioned reviewing the overall approach, you surely want to avoid discovering problems in such fundamental things only after all the code is written, because it may mean that the code has to be thrown away. Therefore it’s more efficient to review them earlier, e.g. in a separate technical design phase, through brief reviews of unfinished code and so on. This is especially important if you work with inexperienced developers.
Surprisingly often, people forget to reserve time for fixes, and so the issues found are not resolved at all (or are postponed to a later version). That drastically decreases the value of code reviews. It happens because code reviews are treated as something optional, something without a big impact on the final product. That may be true if only naming is checked, but a proper review does much more. My recommendation is to consider code review a mandatory part of the development phase and not to send any code to proper functional testing until it has been reviewed and fixed. This makes sense also because reviews quite often lead to code changes, and it’s a waste of time to repeat functional testing several times (which often also means a new release to a test environment).
Custom Best Practice checks
If you’re interested, I can dedicate one of my next posts to the implementation of custom rules for the Best Practice checks.