The merge request checklist

To avoid multiple rounds of discussion before a merge request can pass, our team has put together a checklist of best practices:

Review your own merge request

This is really important as it helps catch obvious missing items. Moreover, it is a good opportunity to add comments that give the reviewer context (especially business knowledge, e.g. "the client has signed off on this feature", "ceding performance in favour of legibility and maintainability here").

For maximum impact, try to take a break between finishing the changes and reviewing them.

Analyse API and documentation changes, if any

For any change, check whether documentation needs to be updated, for example, "as of version x.x.x, the data model now looks like the following", or "updating to Node 8 means that we can now use async/await".
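The Node 8 note, for instance, might be accompanied by a small before-and-after snippet in the docs. A minimal sketch (fetchUser is a hypothetical helper that returns a promise):

    // Before (promise chain):
    function getUserName(id) {
      return fetchUser(id).then(user => user.name);
    }

    // After (Node 8+, async/await):
    async function getUserName(id) {
      const user = await fetchUser(id);
      return user.name;
    }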

Our project also consists of services that provide REST endpoints, and it is always good to analyse changes to the API: as the application grows, it is easy to add endpoints, difficult to change them and almost impossible to remove them.
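As a minimal sketch of why that is, assuming an Express-style service (the routes and response shapes here are made up): once clients depend on a response's shape, changing an endpoint in place usually means adding a versioned one instead and leaving the old one around.

    const express = require('express');
    const app = express();

    // Existing endpoint: clients already depend on this response shape,
    // so changing it in place would break them.
    app.get('/api/users/:id', (req, res) => {
      res.json({ id: req.params.id, name: 'Ada Lovelace' });
    });

    // A changed shape goes in a new, versioned endpoint instead;
    // the old one lingers until every client has migrated.
    app.get('/api/v2/users/:id', (req, res) => {
      res.json({ id: req.params.id, name: { first: 'Ada', last: 'Lovelace' } });
    });

    app.listen(3000);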

Testing and test implementation

We try to follow TDD, so it is important that the tests we write cover all potential use cases. Moreover, we have unit tests, integration tests and end-to-end UI tests, and depending on the feature, some tests need to be prioritized above others. For example, if a feature implements some sort of deep copy method, the priority would be on integration testing to ensure that an object, along with all of its children, is copied and saved in the database with all of the required rules satisfied (the names of the copies have been set appropriately, etc.). A few unit tests covering the services that generate names for the copies, plus a UI test verifying that the feature works, are then enough to round out the coverage.
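A sketch of the kind of integration test that would take priority in the deep copy example, assuming Jest-style syntax and hypothetical createProject, copyProject and findProject services:

    test('deep copy saves the object and all of its children', async () => {
      const original = await createProject({ name: 'Demo', tasks: [{ title: 'Task 1' }] });
      const copy = await copyProject(original.id);

      // Read the copy back from the database to verify it was persisted.
      const saved = await findProject(copy.id);
      expect(saved.id).not.toBe(original.id);
      expect(saved.tasks).toHaveLength(original.tasks.length);
      expect(saved.name).toBe('Demo (copy)'); // the naming rule is satisfied
    });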

Think about negative tests

A couple of us were lucky enough to attend a really interesting talk on testing, whose premise was that developers take general requirements and narrow them down to solutions, whereas testers take solutions and expand them back into requirements. For example, if we were testing a physical calculator, we might press really hard on the buttons, check that the numbers are visible in direct sunlight or verify that the calculator works on every day of the week.

We refer to this paradigm as negative testing and try to incorporate as much of it as possible when writing tests. As an example, if a method returns a list of items, the obvious test would be to ensure that the expected items come back given the correct conditions. However, it is just as instructive to test that no items are returned under other conditions. Such thinking also helps to simplify our API in general.
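A minimal sketch of that pairing, again assuming Jest-style tests and a hypothetical listItems service:

    // The obvious, positive test: the right items come back under the correct conditions.
    test('returns the active items belonging to the user', async () => {
      const items = await listItems({ userId: 1, status: 'active' });
      expect(items).toHaveLength(2);
    });

    // The negative test: nothing comes back under other conditions.
    test('returns no items when the user has none that are active', async () => {
      const items = await listItems({ userId: 2, status: 'active' });
      expect(items).toEqual([]);
    });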

This is quite similar to the previous point, but it bears mentioning as a separate item: one really needs to put on their testing hat to get a broad picture of all the things that the proposed changes could potentially break.

Clarify when you start reviewing an MR and when you are finished

Sometimes more than one person ends up reviewing the code. For smoother communication, we propose that whoever starts a review adds a comment saying so, and another comment to indicate that they are finished. We generally do not assign reviews to anyone in particular, as this encourages everyone to take a look at the changes. Everyone's code is open to critique by anyone on the team, and indeed anyone in the organization.

If a comment was made regarding a concept, check all other places where it might apply

Depending on the reviewer, it is possible that a comment made on a particular concept applies to several parts of the code, not just one specific instance. For example, if the review comment was to check equality in JavaScript using === rather than ==, then it is likely that the same mistake has occurred in other places as well.
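The == operator coerces its operands before comparing, which is exactly why such a comment tends to apply across the whole codebase:

    console.log(0 == '');            // true  - '' is coerced to the number 0
    console.log('1' == 1);           // true  - '1' is coerced to the number 1
    console.log(null == undefined);  // true
    console.log(0 === '');           // false - strict equality, no coercion
    console.log('1' === 1);          // false
    console.log(null === undefined); // false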

Conclusion

For those with limited time, the bare minimum that one can do is review their own merge request to check for any glaring mistakes. Testing is a close second and taking a few minutes to think about all the potential edge cases can lead to a very stable user experience.

I hope that you found this checklist helpful. Do not hesitate to get in touch with your own merge request experiences :)