Going live with your app ... and then?
The mobile app market is changing rapidly, and everyone is competing to be noticed. Once the app is launched, you have to be ready for continuous optimisation and improvement, and that means frequent testing.
The QA role doesn’t stop once the app is live. The responsibilities continue: regression and automation testing, signing off hotfixes and new releases - and doing it all quickly, because customers don’t like to wait!
In the Pre-Launch phase, the business does the following:
- Analysis of the apps and competitor offerings already available in your area.
- Review of competitors’ apps: strengths, mistakes, improvements, issues faced, etc.
- Testing of the app on all platforms before launch: beta testing with actual/internal users.
- Setting up proper crash reporting/analytics for the app.
- Promoting the app on your web channels.
- Building social media connections.
- Researching the latest technologies and the support required.
The QA’s role in the Pre-launch phase:
- Real-device testing via TestFlight / closed track (the beta-testing channels for iOS and Android respectively).
- Testing in lower environments with a cloud device farm.
- Ensuring that feedback from beta testers has been addressed.
- Real-world scenario testing: slow networks, interruptions, modified OS versions, limited RAM.
- Confirming the automation suite is green.
- Confirming regression and exploratory testing are green.
- Making sure crashes are rare.
- Ensuring that crash analytics is configured correctly and that we can get the desired logs.
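One way to make the last check concrete is a pre-launch sanity test on exported crash reports. The sketch below is illustrative only: the field names are assumptions, since real SDKs such as Firebase Crashlytics define their own schemas.

```python
# Hypothetical sanity check: does every crash report carry the fields QA
# needs to recreate the crash (device, OS, app version, stack trace)?
REQUIRED_FIELDS = {"device_model", "os_version", "app_version",
                   "stack_trace", "timestamp"}

def validate_crash_report(report: dict) -> list:
    """Return the sorted list of required fields missing from a report."""
    return sorted(REQUIRED_FIELDS - report.keys())

# Example: a report missing the stack trace fails the check.
report = {
    "device_model": "Pixel 7",
    "os_version": "Android 14",
    "app_version": "2.3.1",
    "timestamp": "2024-05-01T10:22:13Z",
}
missing = validate_crash_report(report)
# missing == ["stack_trace"], so the analytics configuration needs fixing
```

A check like this can run against a handful of deliberately triggered test crashes before sign-off, so you know the logs will be usable when a real crash arrives.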
Post-launch, management’s main areas of focus are the analytics, downloads, reviews and ratings - not to forget the feedback and incident reports.
The actual thrill starts post-release: the feedback, reviews, ratings, analytics, crashes, new features and whatnot.
As a team, one needs to find the right balance between shipping a hotfix and waiting for the next release.
Post-launch time for a QA:
This is when a QA is required to give a quicker sign-off!
(You might think a QA only does testing, pre- or post-launch. There is more to the role than the word ‘Testing’… Read on.)
Known Issues & Store ratings:
Reviews and ratings are the way to attract more customers - or to turn them away! Potential customers may not even install the app if they see poor ratings and comments. One should not take any risks here!
Consider a scenario where the QA had raised a bug and the team deferred it - and post-launch, a customer leaves a one-star review on the store for that very issue.
This is the time the team realises: The longer you wait to fix a problem, the more expensive it is to correct.
The known issues (defects found pre-launch) should be well analysed before deciding to fix them later. One cannot blame everything on tight timelines. It’s better to release the app a day late than to receive lower ratings from multiple customers for one known issue! Makes sense?
The positive part here is that you already know the steps and causes of the issue, so it’s quick to recreate it, retest, run a quick regression and sign off.
Hotfix or a Release:
It all depends on the app’s reviews and post-launch plan. Any issue that a user reports should be added to the backlog for prioritisation.
- As a team, pick up the bugs or known issues from the release notes that are critical to users and the business for inclusion in a hotfix (issues the customers are yet to discover, or ones that were turned off so that customers are not impacted). Fixing a mobile bug is neither fast nor cheap: you have to go through the complete pre-launch process (except, in my view, the beta-testing timeline for hotfixes).
- App review and submission waiting times also differ per platform: Google and Apple have different review times, and first-time and subsequent review times differ too.
So, see if the fix can wait for the next release unless it’s critical for the business and users. Do not hesitate to put your voice into the decision-making.
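The hotfix-or-next-release call above can be sketched as a simple triage rule. The thresholds and field names here are illustrative assumptions, not a standard; each team should tune them to its own release cadence and user base.

```python
# Rough triage rule: hotfix only for critical issues, or major issues
# that hit a meaningful share of users with no workaround. Everything
# else goes to the backlog for the next scheduled release.
def needs_hotfix(severity: str, affected_users_pct: float,
                 workaround_exists: bool) -> bool:
    """Decide whether an issue justifies an out-of-band hotfix."""
    if severity == "critical":   # crashes, data loss, blocked journeys
        return True
    if (severity == "major"
            and affected_users_pct >= 5.0   # assumed threshold
            and not workaround_exists):
        return True
    return False                 # schedule for the next release

# A cosmetic bug hitting 1% of users with a workaround waits:
# needs_hotfix("minor", 1.0, True) -> False
```

Encoding the rule, even loosely, keeps the debate focused on severity and user impact rather than on whoever argues loudest in the release meeting.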
Testing the Crash Analytics again:
The analytics give you a detailed picture of where and when a crash happened: the device model, the network, the OS and, not to forget, the actual journey in terms of events, so that you can recreate the crash on a test device.
- Validating the analytics changes to confirm we get enough information about each crash/exception (production analytics reports can vary depending on user consent).
- Verifying any new crashes that you didn’t encounter in pre-launch testing. Some scenarios happen only in the real world.
- Analysing the analytics/Crashlytics for events, errors, backend failures, and how often crashes happen and to how many users.
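That last analysis step can be sketched as a small triage script. This assumes crash events are exported as simple (user, crash-signature) pairs; a real analytics export (for example, Crashlytics data linked to BigQuery) carries far more fields.

```python
from collections import defaultdict

def triage(events, total_users):
    """Rank crash signatures by affected users and compute the
    crash-free-users rate for the release."""
    users_per_crash = defaultdict(set)
    for user_id, signature in events:
        users_per_crash[signature].add(user_id)
    # Most widely felt crashes first: these are hotfix candidates.
    ranking = sorted(((sig, len(users)) for sig, users in users_per_crash.items()),
                     key=lambda item: item[1], reverse=True)
    crashed_users = {user_id for user_id, _ in events}
    crash_free_rate = 1 - len(crashed_users) / total_users
    return ranking, crash_free_rate

# Hypothetical export: two users hit a login crash, one hits a feed crash.
events = [("u1", "NPE@Login"), ("u2", "NPE@Login"), ("u1", "OOM@Feed")]
ranking, rate = triage(events, total_users=100)
# ranking == [("NPE@Login", 2), ("OOM@Feed", 1)], rate == 0.98
```

Ranking by distinct affected users (rather than raw crash counts) stops one user in a crash loop from dominating the priority list.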
New feature Launch / Release:
- Remember that you will have to go through the pre-launch process again: testing in lower environments, TestFlight / closed track, etc.
- Validating the app against any new OS release/update that lands after launch and before the next release to the app stores.
- Testing the new feature from the backlog. The priority may change depending on user feedback.
- Accessibility testing of all pages/modals when a new feature is introduced, considering the various navigation flows.
- Making sure your product, UX, BA and QA teams all conduct user-interface testing for all the new features.
- Making sure the automation scripts cover the old and new fixes as well as the new feature’s test cases.
- Running the automation suite and keeping it green for all hotfixes and new features - in short, for every new release.
- Regression and exploratory testing are green.
- Validating earlier crashes and any new ones that are widespread or affecting most of the customer base.
- Testing hotfixes on real devices.
- Rerunning accessibility checks for any newly added button, page, section, link, heading or text.
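The checklist above only earns a sign-off when every item is green, which can be made explicit as a small release gate. The item names below are assumptions mirroring this article’s list, not a standard.

```python
# Illustrative release gate: sign off only when every checklist item
# reports green. Missing items count as failures, never as passes.
CHECKLIST = ("automation_suite", "regression", "exploratory",
             "accessibility", "real_device_hotfix_tests")

def ready_to_sign_off(results: dict):
    """Return (sign_off, failing_items) from item -> passed mappings."""
    failing = [item for item in CHECKLIST if not results.get(item, False)]
    return (not failing, failing)

# A release with regression still red is blocked, and the gate says why:
# ready_to_sign_off({"automation_suite": True, "regression": False})
# -> (False, [... every item that is not green ...])
```

Treating an unreported item as a failure is the safer default: a sign-off should never depend on a check that simply never ran.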
Keep ensuring that customers have a positive experience when they use the mobile app! Have a good go-live!