Software Testing: Conclusion

by Michele Lindroos

With the release of “Reproducible Tests”, the blog series about Software Testing has reached its conclusion. In this blog post we wrap up the series by discussing what was left out, conventionally considered “future work”.

As I’m writing this, more than half a year has passed since we published the first two blog posts in the series: “The Beginning” and “Types of Tests” came out in January 2022. However, the writing process started long before that. As I recall, the first versions of a few blog posts were sent for review in 2020, and collecting ideas started well before then. The main ideas for the blog series began taking shape in 2017.

Why did I decide to start writing this blog series? Apart from benefiting the readers, writing non-fiction is often beneficial to the author, and that was my main reason. I wanted to see if the vaguely formed ideas inside my head would hold up on paper. Furthermore, I wanted my colleagues at Omoroi to review them.

I am extremely thankful to each and every one who contributed comments on the early drafts of the blog posts. In particular, Jukka Lehtniemi and Tuukka Koistinen provided invaluable feedback. These two gentlemen made the blog series far better than I would ever have been able to accomplish on my own. A huge thanks to Jukka and Tuukka!

Topics not covered in this blog series

Although my main feeling about the blog series is that I’m proud and happy, that doesn’t mean it comes anywhere close to being a complete treatment of Software Testing. The series covers the subjects in Software Testing that I have extensive first-hand experience with (and even one chapter I have very limited experience with, namely “Test Driven Development”). But just because I lack personal experience with a topic doesn’t mean it isn’t important. Let’s review a few topics left out of the blog series.

One such topic is Fuzz Testing. In Fuzz Testing the main idea is to test our product as a black box. We tell the fuzz tester, a computer program, what kinds of valid inputs the product accepts. Then the fuzz tester starts “fuzzing” the input, modifying it in creative ways in search of an invalid input that crashes the product.

Fuzz Testing is typically run completely separately from the rest of testing. While a CI machine typically runs all unit tests, integration tests and so on, fuzz testing is usually set up as a dedicated environment running around the clock. As there is an endless number of mutations that can be applied to the input data, fuzz testing is never “done”. Instead, the longer you run it, the more you may find.
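To make the idea concrete, here is a minimal, hand-rolled sketch of mutation-based fuzzing in Python. The target function `parse_record` and the seed inputs are invented for illustration (real fuzzers such as AFL or libFuzzer are far more sophisticated, using coverage feedback and smarter mutation strategies):

```python
import random

def parse_record(data: str) -> tuple[str, int]:
    """Toy system under test: parses a "name:age" record.
    It naively assumes the input is well-formed."""
    name, age = data.split(":")
    return name, int(age)

def mutate(seed: str, rng: random.Random) -> str:
    """Apply one random mutation: delete, insert, or replace a character."""
    chars = list(seed)
    op = rng.choice(["delete", "insert", "replace"])
    pos = rng.randrange(len(chars))
    if op == "delete":
        del chars[pos]
    elif op == "insert":
        chars.insert(pos, chr(rng.randrange(32, 127)))
    else:
        chars[pos] = chr(rng.randrange(32, 127))
    return "".join(chars)

def fuzz(target, seeds, iterations=2000, seed=0):
    """Mutate valid seed inputs and record any input that crashes the target."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(rng.choice(seeds), rng)
        try:
            target(candidate)
        except Exception as exc:
            crashes.append((candidate, exc))
    return crashes

crashes = fuzz(parse_record, ["alice:30", "bob:25"])
```

Even this toy fuzzer quickly finds crashing inputs, for example by deleting the colon or replacing a digit with a letter, which is exactly the kind of robustness gap fuzzing is meant to expose.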

Another important subject that the blog series leaves out is visualization of test runs and results. As a developer, I’m mostly concerned with the microdynamics between tests and code. I have seen over and over how useful code grows fast as product management wants more out of it, and to keep up that pace, I write automated tests so that I can focus on new code without having to worry about breaking existing code.

But if we zoom out and look at the macrodynamics of software testing, there’s more to it than just what makes a single developer efficient. We should be able to visualize the testing we are doing and the results we are getting. This makes tracking regressions and fixing bug reports faster. Likewise, it can help us understand what kind of testing is valuable to us, i.e. what we should focus on and what is less useful.

Testing embedded systems and software can be very different because the hardware is typically in development, too, and can have defects of its own. Therefore, JTAG interfaces can be valuable: you are testing not only the software but also the hardware, and you can collect data through the JTAG interface.

Exploratory testing is a form of manual testing where the tester does not execute a predefined script but decides ad hoc what to test. The challenge here is to bring structure to the test execution, which usually requires familiarity with the product. On the other hand, the expertise of the test engineer is utilized more than in scripted test execution.

In user experience testing we collect data on how users use and perceive the product. One of the most common techniques is A/B testing, where two sets of users, possibly of different sizes, get different versions of the product. Data is then collected and analyzed, and ultimately the version that produced better customer engagement and satisfaction is chosen.
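The analysis step can be as simple as comparing a conversion metric between the variants. The numbers below are invented for illustration, and a real analysis would also check statistical significance before declaring a winner:

```python
# Hypothetical engagement data for two A/B variants (numbers are made up).
variants = {
    "A": {"users": 1000, "conversions": 48},
    "B": {"users": 1000, "conversions": 61},
}

def conversion_rate(stats: dict) -> float:
    """Fraction of users in the group who converted."""
    return stats["conversions"] / stats["users"]

# Pick the variant with the higher conversion rate.
winner = max(variants, key=lambda v: conversion_rate(variants[v]))
```

In practice a significance test (e.g. a chi-squared or z-test on the two proportions) is needed to rule out that the observed difference is just noise.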

Property-based testing allows us to create test cases beyond what we write ourselves. Rather than writing individual tests, we write properties, and a library generates test cases to verify that those properties hold.
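The following sketch shows the core idea with a hand-rolled checker rather than a real library (libraries such as Hypothesis for Python or QuickCheck for Haskell additionally shrink failing inputs to minimal counterexamples). The generator and properties here are my own illustrative choices:

```python
import random
from collections import Counter

def check_property(prop, gen, runs=200, seed=0):
    """Minimal property-based check: generate random inputs and
    assert the property holds for every one of them."""
    rng = random.Random(seed)
    for _ in range(runs):
        value = gen(rng)
        assert prop(value), f"property failed for {value!r}"

def gen_int_list(rng):
    """Generate a random integer list of random length."""
    return [rng.randint(-100, 100) for _ in range(rng.randrange(20))]

def sorting_properties(xs):
    """sorted() must produce an ordered result that preserves the elements."""
    out = sorted(xs)
    return sorted(out) == out and Counter(out) == Counter(xs)

check_property(sorting_properties, gen_int_list)
```

Instead of a handful of hand-picked examples, the properties are exercised against hundreds of generated inputs, which often uncovers edge cases (empty lists, duplicates, negative values) the author would not have written by hand.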

Finally, one more important topic that the blog series has not covered is testing concurrent code. Multi-threading and concurrency are hard to test since execution is no longer reproducible. A valuable tool for checking the correctness of multi-threaded code is Helgrind, which can automatically detect races and other problems with thread synchronization.
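Helgrind itself is a Valgrind tool aimed at C/C++ binaries, but the underlying problem is easy to illustrate in Python. This sketch deliberately splits an unsynchronized read-modify-write so that increments can be lost; the final count is nondeterministic, which is exactly why such code cannot be tested reproducibly:

```python
import threading

N = 100_000
counter = 0

def unsafe_increment():
    """Unsynchronized read-modify-write: another thread may update
    `counter` between our read and our write, losing increments."""
    global counter
    for _ in range(N):
        tmp = counter       # read
        counter = tmp + 1   # write; not atomic together with the read

threads = [threading.Thread(target=unsafe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With correct locking the result would always be 2 * N; with the race,
# the result varies from run to run and is typically lower.
```

Guarding the loop body with a `threading.Lock` makes the result deterministic again; race detectors like Helgrind find exactly these unguarded shared accesses automatically.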

So, when will we write blog posts about these? As said, this blog series is comprised of ideas collected over many years. If I gain experience in Fuzz Testing or get more involved in test result visualization and everything valuable it can bring, I’m sure to write about it. But for now the blog series includes everything I have experience with, and even a little more.


The creation of this blog series spanned many years. Although the Software Testing series covered a wide range of subjects, it is not an exhaustive collection of software testing technologies and techniques. In this blog post we covered some of the testing methods that were left out of the series.

This article is part of Omoroi’s blog series on Software Testing.
