My first tech job was at a large insurance company, in an IT department of 60 people. I joined as a trainee developer and slowly began learning software development.
After a number of years, I noticed an internal vacancy for a test manager*. I remember reading the job spec and thinking, wow, this sounds like something I’d be interested in trying. I showed an interest, but my line manager steered me away, saying he thought I should keep building on my development skills.
However, a few weeks later, management discussed the role, and it sounded like most of the group was keen on allowing me to move into this position — despite my manager still being unsure. So, I got started.
*A few people were hesitant about giving out a “manager” title. I ended up being “System Test Coordinator”.
What is the purpose of testing?
As part of my test manager role, I attended two training courses. First, the ISEB Foundation in Software Testing. Second, an in-house test management course at head office. The second was fairly boring and I didn’t remember much of it once I returned to my normal workplace. However, the first gave me a qualification, and some genuinely useful things to take back to work.
In the ISEB course, the instructor opened by asking:
What is the purpose of testing?
Keen to make a good impression, I said “to check it works”. I don’t know if the instructor was hoping for a different answer, or was glad to be able to give the “correct” answer. He went on:
No, the purpose of testing is to find faults.
He explained that it’s a lot easier — or more achievable — to prove that something doesn’t work than to prove that it does. (A side note: this instructor spent a fair bit of time explaining that a fault is not the same as a defect, which is more a matter of defect categorisation. In practice, as a tester, I report issues as bugs and let the developers determine the cause of those bugs.)
And so this quickly became a line I shared with colleagues: Don’t test to make sure it works — test to find what’s wrong.
Years later, at a weekly Tech department presentation at a different company, the QA manager gave a talk. He asked that question again, or a version of it:
What’s the point of testing?
In his view, it was fine to test to find bugs, or to check it works, or for other reasons.
The description I like best is that as a tester, you are supposed to assess the quality of the product. (Source) This can mean looking beyond the scope of a specific change, and looking for potential side effects.
What does a good bug report look like?
Having worked as a Scrum Master on a couple of occasions, my first instinct is to answer this question by pointing to a Definition of Ready, or to a template. But there’s more to this than how the bug report is structured.
Sure, there are a few things that are helpful to include: a screenshot of the error with an accompanying link, steps to reproduce the issue, expected results and actual results. That’s all well and good. But following that format doesn’t automatically make it a good bug report.
The tester needs to communicate the severity of the issue, so the team can distinguish the showstoppers from the major, moderate, and minor reports.
The tester needs to categorise the issue correctly, depending on the issue tracker or project management tool used. Testers should know the ins and outs of these tools better than anyone.
Where relevant, the tester needs to crosslink related issues in the same area, to illustrate the cumulative impact of “rot” in any part of the system. A small bug might not be worth fixing on its own — but as part of a larger collection of bugs, this can strengthen the case for fixing a whole section of the software.
Above all, the tester should aim to write bug reports that are hard to dispute. From reading the bug report, it should be clear both what the issue is, and also the impact. The bug report should be reproducible: intermittent issues, or things you’ve not yet pinned down to a specific cause, are harder to fix and can waste a developer’s time. Overall, a tester should make sure their bug reports are written with conviction, getting the reader to see why the issue needs fixing, and — ultimately — giving the developers an irrefutable bug to address.
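To pull those threads together, here’s a rough sketch of the sort of bug report I have in mind. Every detail below is invented for illustration; the point is the shape, not the specifics:

```
Title: Quote page crashes when the renewal date is left blank

Severity: Major (blocks quote creation for a common input mistake)
Category: Defect / Quotes module

Steps to reproduce:
1. Log in as any broker user
2. Start a new quote and fill in every field except "Renewal date"
3. Click "Calculate"

Expected result: a validation message asking for a renewal date
Actual result: an unhandled error page, and the quote in progress is lost

Evidence: link to the failing page, plus a screenshot of the error
Related issues: links to other validation bugs on the quote form, if any
```

A report like this is hard to dispute: the impact is stated up front, the steps are reproducible, and the related issues show whether it’s a one-off or part of a wider problem.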
Don’t automate too much
Test automation is a big topic. For some organisations, the holy grail of testing seems to be — automate all of the things! But automation is expensive, and can become a time sink if you’re not careful.
Automation shouldn’t be a blanket policy — and it isn’t a replacement for any and all testing.
There are a few things you can — and should — automate. Unit tests are a key part of test-driven development, and are worth writing (by their nature, they are automated). Certain integration tests, regression tests, or “smoke tests” — that is, checking that an environment is functional once it’s set up — are all candidates for automation too.
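To make that concrete, here’s a minimal sketch of two of those automated checks using pytest. The premium calculation and the health-check URL are made up for illustration; substitute whatever your own system actually exposes.

```python
import pytest
import requests


def calculate_premium(base_rate: float, risk_factor: float) -> float:
    # Hypothetical piece of business logic, included so the unit test has something to check.
    return round(base_rate * risk_factor, 2)


def test_calculate_premium_applies_risk_factor():
    # A unit test: one small piece of logic, checked in isolation, automated by its nature.
    assert calculate_premium(100.0, 1.5) == 150.0


@pytest.mark.smoke
def test_environment_responds():
    # A "smoke test": once an environment is set up, check that it responds at all.
    # The URL is a placeholder for your own health or status endpoint.
    response = requests.get("https://test-env.example.com/health", timeout=5)
    assert response.status_code == 200
```

You can run just the smoke checks after a deployment with `pytest -m smoke` (registering the `smoke` marker in pytest.ini keeps pytest from warning about it).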
The extent to which you should automate these kinds of tests largely depends on your team size and experience, and on your ability to keep the tests updated as the software changes. Also consider the time it takes to automate each part of the system, and whether your test tools are actually capable of doing so. Test your test tool first — don’t just plough on with automating everything.
Remember that automation will only test as far as you ask it to. Don't treat, say, a passing set of unit tests as proof that the software quality is good. Look at expanding coverage of negative tests — don't only focus on "happy path" testing, aka the "just test it works" mindset.
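As a small illustration of the difference, here’s a happy-path test alongside a few negative tests, again in pytest and with a made-up parsing function:

```python
import pytest


def parse_policy_number(raw: str) -> int:
    # Hypothetical parser: policy numbers must be positive integers.
    value = int(raw)
    if value <= 0:
        raise ValueError(f"not a valid policy number: {raw!r}")
    return value


def test_parse_valid_policy_number():
    # Happy path: well-formed input comes back as expected.
    assert parse_policy_number("12345") == 12345


@pytest.mark.parametrize("bad_input", ["", "abc", "12.5", "-1", "0"])
def test_parse_rejects_invalid_policy_numbers(bad_input):
    # Negative tests: malformed input should fail loudly rather than slip through.
    with pytest.raises(ValueError):
        parse_policy_number(bad_input)
```

The happy-path test on its own would still pass if the validation were missing entirely; the negative cases are what actually prove it’s there.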
Don’t try to automate exploratory testing, UI testing, or user acceptance testing. These are best done by a person, not by a computer.
And don’t try to phase out the need for any kind of manual smoke testing or regression testing. Sometimes, a few sanity checks of both the software and your test scripts can go a long way.
In 2021, I’m trying to write more regularly on my blog – hopefully one post per week. See my progress so far here: Weekly blogging in 2021