SoCraTes Day Switzerland is an open-space software craftsmanship conference for those passionate about software development. The following is a short summary of take-aways from the sessions attended.
Mutation Testing in Java
The goal behind mutation testing is to evaluate the quality of existing software tests. This is done by changing the bytecode and then running the suite of unit tests to assert that the tests did fail when statements were removed or logic changed. For example, if we have the following code:
public class FooBar {
    public int foo(int i) {
        App.reboot();
        return i + 1;
    }
}
with the corresponding test:
@Test
public void shouldReturnFooValue() {
    // when
    int actual = sut.foo(1);

    // then
    assertEquals(2, actual);
}
In this case, it is pretty clear that our test is incomplete, as there is a side-effect that has not been tested. In a clean-code world this function should never have been written, but where code like this does exist, removing the App.reboot() line should make our test fail. Given that the test still passes, we have a problem that needs to be fixed. The changes applied to the code are known as "mutations", and a mutation that survives the test suite points to a gap in the tests; there are many types of mutations that can be configured.
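In source terms, the surviving mutant described above is equivalent to the following sketch (mutation tools actually apply the change to bytecode; the class name here is made up for illustration):

```java
// Sketch of the mutant in source form: the "void method call" style of
// mutation deletes the App.reboot() line, yet the original test still passes.
public class FooBarMutant {
    public int foo(int i) {
        // App.reboot();   // removed by the mutation
        return i + 1;
    }

    public static void main(String[] args) {
        FooBarMutant sut = new FooBarMutant();
        // The original assertion still holds against the mutant, so the
        // mutant "survives" and reveals the untested side-effect.
        if (sut.foo(1) != 2) throw new AssertionError("expected 2");
        System.out.println("mutant survived: test still passes");
    }
}
```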
While this is an amazing tool for radically improving the quality of tests and code, the drawback is that changing the bytecode and re-running the tests is slow, and the runtime grows quickly with the number of mutations and tests.
The current framework of choice for Java is http://pitest.org/.
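One way to kill such a mutant is to make the side-effect observable, for instance by injecting the dependency behind an interface. The Rebooter interface and hand-rolled spy below are assumptions for illustration, not part of the original code:

```java
// Hypothetical seam: extracting the reboot side-effect behind an interface
// so a test can observe whether it happened.
interface Rebooter {
    void reboot();
}

class FooBar {
    private final Rebooter rebooter;

    FooBar(Rebooter rebooter) {
        this.rebooter = rebooter;
    }

    int foo(int i) {
        rebooter.reboot();   // side-effect is now observable in tests
        return i + 1;
    }
}

public class FooBarTest {
    public static void main(String[] args) {
        // Hand-rolled spy: records whether reboot() was called.
        final boolean[] rebooted = {false};
        FooBar sut = new FooBar(() -> rebooted[0] = true);

        int actual = sut.foo(1);

        // A mutation that deletes the reboot() call now fails this check.
        if (actual != 2) throw new AssertionError("expected 2, got " + actual);
        if (!rebooted[0]) throw new AssertionError("reboot() was never invoked");
        System.out.println("both assertions passed");
    }
}
```

With this seam in place, the "remove void method call" mutation can no longer survive the suite.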
Manual Testing
A simple yet powerful talk on how tests written by software developers usually fall into the fact-checking category, i.e. does the system do what I expect it to do? Given that we write software for other humans, there is a whole set of exploratory tests that usually go unwritten, and these gaps are the cause of bugs spilling out into production, the classic example being: "I didn't think that someone would ever use it in this way".
The above was illustrated via a simple example of building a calculator that has 4 buttons: 2, +, = and c. Testing a simple case of "2 plus 2 equals should display 4, and then pressing c should clear the display" turned out to be harder than expected:
- 2 plus 2 should display 4 and then c should clear it
- Run the above test again
- Try waiting some time before pressing buttons (10 mins)
- Try pressing the buttons for a long time (10 mins)
- Run the test on a Monday
- When 2 plus 2 equals is pressed, does the 4 display for long enough to be seen?
- Do the digits on the display appear fuzzy?
- When performing the calculation, does the calculator become hot?
The conflict of interest in testing arises from the thinking model of a developer vs. that of a tester: the developer wants to converge on one working solution within a countably infinite space, whereas the tester wants to explore everything that can go wrong in an infinite space.
Continuous Refactoring
Simply put, continuous refactoring is best practice during software development. Some other points gleaned from the discussion were:
- Naming is one of the hardest problems in software development and should be undertaken very carefully
- As much as possible, don't leave refactoring to another day: short, simple refactorings should be done right away (git stash is your friend). Longer and more complicated refactorings should be documented, either in a simple to-do list to be tackled at the end of the sprint or in an issue (either on its own or linked to a user story affected by the refactoring)
- Working on legacy systems means that sometimes refactoring needs to be done without tests. Pair programming works best in this case: just enough refactoring is done initially to be able to write a decent test, and then the cycle is repeated. The review should ideally be done by a third party to minimise domain problems (i.e. developing something that is not what the user wanted)
- structure101 is a useful tool to visualise the relationships between modules in a codebase, which is very helpful when refactoring.
- The Mikado Method offers a strategy for performing good refactoring. The general idea is that you take one slice of your system and try to refactor it, starting from the output, until you drill down to the actual dependencies where the meat of the refactoring is required. If this turns out to be too complex, the entire exercise is discarded and a new slice is picked up. However, the dependency graph thus generated is extremely useful in future refactorings.
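The "refactor without tests" workflow mentioned above often starts with a characterization test: pin down what the code does today, however odd, before changing anything. A minimal sketch, where LegacyPricer and its magic numbers are entirely hypothetical:

```java
// Hypothetical legacy class whose behaviour we want to pin down before refactoring.
class LegacyPricer {
    double price(int qty) {
        if (qty > 10) return qty * 0.9;   // undocumented bulk discount
        return qty * 1.0;
    }
}

public class CharacterizationTest {
    public static void main(String[] args) {
        LegacyPricer sut = new LegacyPricer();
        // Record what the code does today; these values describe current
        // behaviour, not necessarily correct behaviour.
        if (sut.price(5) != 5.0) throw new AssertionError("price(5) changed");
        if (sut.price(20) != 18.0) throw new AssertionError("price(20) changed");
        System.out.println("behaviour pinned");
    }
}
```

Once the behaviour is pinned, each small refactoring step can be verified against these assertions before the next slice is attempted.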
Recruitment & Interviewing
An interesting discussion on recruiting strategies for software companies and interviewing tips for candidates:
- GitRecruit can be used to find passionate developers.
- Typos in resumes are highly correlated with poor candidates (based on an analysis of a year's worth of hiring data).
- The Joel Test is an excellent filter for good software companies. Follow-on questions include:
- What is/was your most costly technical decision?
- Where do your product/feature ideas come from?
- How do you do requirements gathering?
- Hosting hackathons and tech events is also an excellent method of attracting talent.
- A taxonomy of programmers who apply for startup jobs.
How to convince team-mates
A soft-skills discussion on how to get the buy-in of team-mates for certain technical decisions:
- Time-boxed experiments to allow others to try the new approach.
- Engage others and ask them for help with the new approach; potentially involve them in low-level decision-making processes.
- Most importantly, before trying to convince someone else, be open yourself to change!