Becoming a better Tester!
December 31, 2020
⌚ : 6 min

Practical issues of implementing Conference-time advice

It’s 2020, and the pandemic is still going on. Everyone (at least in the IT industry) is working from home. Conferences and meetups are virtual, and either free or available at an economical cost.

In all these forums, we hear about best practices, new tools, and the latest buzzwords.

Once we’re back in the real world, we want all the buzzwords to be implemented in our project. We try to apply the lessons from the talks once the project warms up, but as things speed up, reality strikes!

For example:

Keep the bugs in the backlog for the next (or a future) sprint: Either the unit test coverage was incomplete, or testers were not involved in the requirement discussions; or testers thought from a user perspective and found the defects. When the team believes testers bring value to the table, these kinds of bugs would be captured as Acceptance Criteria (AC) in the first place. Make it a practice to add new ACs while requirements are being groomed.

Timelines are stringent: You are asked to test all the user stories in just two days and provide the sign-off.

This is because the client thinks we automate everything, so sign-off can be given even in one day. The client and team don’t know whether automation within the same sprint is feasible. Or things get missed because of delivery pressure. Or the team slogs extra hours in the evenings or over the weekend.

Test cases have been declared as passed, right? Then surely we can release!: Everyone on the team just waits to see that green pipeline to merge the code. Yes, smoke/regression test cases do cover the risks that are checked via automation. But if the various Acceptance Criteria have not been verified via automation, or if the team has performed no exploratory or user-based testing, then relying on a green pipeline alone is quite a high risk.

We missed telling you, but yes, “we” (dev and PO) discussed, and this is the recent change.: You are in the middle of testing; you managed some automation too, but suddenly the automation fails on the new build. On investigating, you learn that there was a change and that the testing team was not involved in the discussion. This brings frustration, and while you rush to cover this new change, you don’t get time to think about what could go wrong beyond the listed ACs.

If a change were communicated to both testers and devs together, preferably well ahead of time, then testers would have the opportunity to think it through and identify new risks earlier.

We don’t have time to add those element IDs, keep it in the backlog.: Devs are under a lot of pressure to finish the user story, and they might skip adding the element IDs needed for automation. This delays in-sprint automation and adds extra work to the next sprint.

When such support for automation is eventually added, testers who should be experimenting with the product or analysing the events instead have to automate tests for earlier requirements to finish the automation coverage.

If we add this as an Acceptance Criterion in the user story itself, we leave more time for craft-related activities later.
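To make the element-ID discussion concrete, here is a minimal sketch of why stable test hooks matter. It assumes a `data-testid` attribute convention (the attribute name and the markup are hypothetical); elements carrying such a hook can be located by automation without depending on brittle page structure, while the hook-less button would need a structural selector that breaks on any layout change.

```python
from html.parser import HTMLParser

class TestIdFinder(HTMLParser):
    """Collects elements that expose a stable data-testid hook."""
    def __init__(self):
        super().__init__()
        self.found = {}  # data-testid value -> tag name

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "data-testid" in attrs:
            self.found[attrs["data-testid"]] = tag

# Two controls carry a stable hook; the Cancel button does not,
# so automation would have to reach it via brittle structure.
page = """
<form>
  <input data-testid="email-input" type="email">
  <button data-testid="submit-btn">Sign up</button>
  <button>Cancel</button>
</form>
"""

finder = TestIdFinder()
finder.feed(page)
print(finder.found)  # {'email-input': 'input', 'submit-btn': 'button'}
```

If adding these hooks is an AC of the user story, the dev adds two attributes in minutes; if it lands in the backlog, the tester inherits fragile selectors and rework next sprint.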

When you newly join a company/project, the first expectation of you is: “Create the automation framework from scratch”: They say, “Take all the manual cases and convert them to automation. We want everything automated.” Have you ever assessed your product? Does the new person know how complex the product is? Shouldn’t the person first get some hands-on experience with the product and understand what should be automated, rather than rushing to convert manual tests to automated ones?

What do you even test!?: “Everything is working fine, there were no bugs logged in regression. Why did you take so much time to do the regression? What do you even test when everything is fine!?” Well, a tester performs many different kinds of tests while testing. Just because no bugs were found doesn’t mean the tester isn’t testing. The tester is not there to introduce new bugs; the tester is there to identify the risks, if any, in the product. Finding zero bugs therefore in no way implies that you don’t need testers on the team.

We want AI in automation: With all sorts of marketing, this has become an even bigger misunderstanding to resolve than “100% automation”. This point warrants a future article.

Testers are gatekeepers: This is usually said when someone finds a bug in production. Have teams ever considered whether testers are actually authorised to stop a release when we call them gatekeepers?

Teams usually just look for a sign-off from a tester in order to go live, and for someone to blame when bugs are found in production, while most of the time ignoring the risks raised by testers with “Users won’t do this”. The reasons for such a short-sighted position are all the points listed above.

We want 100% automation: Clients believe that they are paying for automation which will, in future, save budget and shorten time to market.

However, the customer rarely knows - or forgets - that testing can find things that automation can’t.

A point to note about automation: we have never explained this to the customer. It takes a lot of effort to explain the focus and boundaries of automation, and the idea of, and need for, exploratory testing.

The testing team’s company (if outsourced), the team, and everyone involved have to take responsibility for understanding what automated tests actually do, and then agree on how to cover the remaining review and verification needs.

This should start right from the time when bid discussions begin.

Why do we see a difference between testing as a craft and what happens in reality?

The reality is culture-based and can’t be changed overnight. The product needs to be shipped every week.

We have to keep stating the importance of critical thinking, analytical skills, a solution-based mindset, user-based thinking, the craft, and the risks.

How do we embed testing-as-a-craft knowledge while being pragmatic?

This will require support from higher management who understand testing as critical thinking, not just as “100% automation”. In the meantime:

  1. Set aside time for exploring and experimenting with the product.
  2. Then feed the notes/bugs/ideas from the exploratory sessions into the automation suite.
  3. Add the expected analytics/events (UI, workflow) to the user story’s Acceptance Criteria.
  4. Set aside some time to look through the production logs for any errors you can recreate and the team can fix.
  5. Raise a risk when you see it.
  6. Test the app as a user would.
  7. Capture the user experience as test cases/test ideas.
  8. Keep a test/verification suite for what can’t really be automated but is necessary as part of regression, e.g. for a mobile app: backgrounding, app switching, full RAM, full storage, specific third-party libraries, etc.
  9. Check whether the error messages make sense when you read them. As a user, would they give you enough indication of what has gone wrong?
  10. Figure out how to be part of requirement discussions and, drawing on your experience, try to prevent issues before they get developed. Yes, this is easier said than done. People and culture are both a large part of the problem. But if understood and wielded well, they can also be the most effective tools you have to make good things happen.
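Point 4 above (mining production logs for recreatable errors) can be sketched in a few lines. This is a minimal, hypothetical example - the log lines, format, and the `ERROR` convention are assumptions; in practice you would read from your log aggregator - but the idea is the same: group identical error messages and surface the most frequent ones, since those are the easiest to recreate and the most valuable to fix.

```python
import re
from collections import Counter

# Hypothetical sample of production log lines; in reality these
# would come from a log file or aggregator query.
log_lines = [
    "2020-12-30 10:01:02 INFO  user logged in",
    "2020-12-30 10:01:05 ERROR PaymentTimeout: gateway did not respond",
    "2020-12-30 10:02:17 ERROR NullPointerException in CartService",
    "2020-12-30 10:03:44 ERROR PaymentTimeout: gateway did not respond",
    "2020-12-30 10:04:01 WARN  slow query on /orders",
]

def summarise_errors(lines):
    """Count each distinct ERROR message so the most frequent,
    most recreatable issues surface first."""
    pattern = re.compile(r"ERROR\s+(.*)")
    counts = Counter()
    for line in lines:
        match = pattern.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts.most_common()

for message, count in summarise_errors(log_lines):
    print(f"{count}x  {message}")
# 2x  PaymentTimeout: gateway did not respond
# 1x  NullPointerException in CartService
```

Even a crude summary like this gives the tester a ranked list of real failures to recreate, instead of waiting for a user to report them.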

Conferences are reminders of what ‘good’ looks like. Going forward, we should avoid criticising devs or the limitations of automation, and focus more on scenarios and solutions best suited to the community and to our clients.


I work on everything Quality!