In a survey of over 3,000 mobile users, 80% said they would attempt to use a problematic app no more than three times before uninstalling it. Today's users are quick to abandon mobile apps that don't perform well, and rightfully so: with alternatives in every app category, a competitor is just a download away. That's why rigorous testing before pushing mobile apps or features to production is essential.
However, the reality is that mobile testing is hard, and mistakes can undermine testing efforts and lead to subpar results. Here are four common mistakes teams make when testing mobile apps, and how to avoid them.
1. Lacking a clear mobile testing strategy
If you plan your day with a to-do list, you're likely to be more productive — the same goes for testing mobile apps. Lack of a clearly outlined mobile testing strategy is the number one reason most mobile app testing efforts do not yield successful results.
Sometimes teams approach mobile testing with the same processes and strategies used for testing websites or web apps. This shouldn't be the case: the mobile development ecosystem has unique factors (such as device fragmentation, a different user experience, localization, and different user behavior) that should all be accounted for in the testing strategy.
Before you embark on testing a mobile app, draw up a plan that addresses the following key decisions:
Testing scope: A testing scope includes, but is not limited to:
- Features, scenarios, or components that will be tested.
- Features, scenarios, or components that won’t be tested and can be considered an acceptable risk.
- How testing will be conducted.
- The overall objective of the testing effort.
Device configurations to test on: Due to the fragmented nature of mobile, your app may be used on a wide variety of hardware and OS variants. It would be futile to attempt to test on every possible combination, which is why you should prioritize the configurations to test on. Base this decision on the devices your target audience actually uses and on market-share research. Testlio finds that leading app providers test on a minimum of 24 unique device and OS combinations. (A sketch of how to encode such a device matrix follows this list.)
What tools to use: The tools you use can either increase your efficiency or slow you down. When choosing the tools to use, consider:
- Your team's skills. For example, if your testers are not familiar with writing code, choosing a heavily code-dependent tool might not be the best option.
- Your budget. Can your team afford to pay for the tool in the long term?
- The tool's fit for the specific problems you need to solve.
- Availability of good documentation, customer support, and community.
- Ease of integration with other tools in your stack.
Test approach: As part of your testing strategy, it's important to define the testing process, the levels of testing, and the roles and responsibilities of every team member.
Risk analysis: Make a list of all the risks you envision. Prepare a plan to mitigate these risks, as well as a contingency plan in case you encounter them in the real world.
Test environments: How will test environments be set up, and what extra steps will be taken to keep them stable? If test environments (staging, QA, or pre-production) are unstable, you can't run tests, and the whole team's progress suffers.
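To make the device-matrix decision concrete and repeatable, you can encode it in your build configuration. Here's a minimal sketch assuming an Android project using the Android Gradle Plugin's Gradle-managed devices (AGP 7.3+); the specific devices and API levels are illustrative, not recommendations:

```kotlin
// build.gradle.kts (app module): a sketch of a prioritized device matrix,
// assuming Gradle-managed devices. Replace entries with your own matrix.
android {
    testOptions {
        managedDevices {
            devices {
                // One entry per prioritized device + OS combination.
                maybeCreate<com.android.build.api.dsl.ManagedVirtualDevice>("pixel6Api33").apply {
                    device = "Pixel 6"
                    apiLevel = 33
                    systemImageSource = "google"
                }
                maybeCreate<com.android.build.api.dsl.ManagedVirtualDevice>("nexus9Api30").apply {
                    device = "Nexus 9"
                    apiLevel = 30
                    systemImageSource = "aosp"
                }
            }
        }
    }
}
```

With this in place, `./gradlew pixel6Api33DebugAndroidTest` provisions that exact emulator configuration and runs your instrumented tests on it, making the prioritized matrix reproducible for the whole team and in CI.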
2. Ignoring flaky tests
A fundamental principle of automated testing is determinism: if the code hasn't changed, the results shouldn't change either. Sooner or later, though, one or more of your tests will begin to fail intermittently without any code change or apparent reason. These are called flaky tests.
Flaky tests can happen for a variety of reasons (environmental differences, invalid assumptions about the state of test data or the system, networking failures, instability) and are pretty common in the software industry. According to a survey published in 2021, even large firms like Google and Microsoft are not immune: approximately 41% and 26% of their tests, respectively, were found to be flaky.
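To make the pattern concrete, here's a minimal JUnit 4 sketch in Kotlin (the `SyncService` class is hypothetical, purely for illustration). The first test encodes a timing assumption as a fixed sleep and will pass or fail depending on machine load; the second polls for the condition, which removes the flakiness without changing what's verified:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test
import kotlin.concurrent.thread

// Hypothetical service, for illustration only: a background sync
// finishes after an unpredictable delay.
class SyncService {
    @Volatile
    var status: String = "PENDING"

    fun startSync() {
        thread {
            Thread.sleep((50..200).random().toLong()) // simulated variable latency
            status = "DONE"
        }
    }
}

class SyncServiceTest {
    // Flaky: bakes a timing assumption into a fixed sleep, so the outcome
    // depends on machine load rather than on the code under test.
    @Test
    fun syncCompletes_flakyVersion() {
        val service = SyncService()
        service.startSync()
        Thread.sleep(100)
        assertEquals("DONE", service.status)
    }

    // Deterministic: polls until the condition holds or a generous deadline
    // passes, so timing variance no longer decides the result.
    @Test
    fun syncCompletes_stableVersion() {
        val service = SyncService()
        service.startSync()
        val deadline = System.currentTimeMillis() + 5_000
        while (service.status != "DONE" && System.currentTimeMillis() < deadline) {
            Thread.sleep(20)
        }
        assertEquals("DONE", service.status)
    }
}
```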
Flaky tests within a test suite are perilous because team members will lose faith in its efficacy, start disregarding its warnings, or even turn it off completely. Then, when a test genuinely fails, team members are likely to disregard it on the assumption that it's failing for no reason as usual, and may end up pushing a bug to production.
A test we don't trust is much more dangerous than a non-existent test.
To keep flaky tests from dragging your test suite to the point where it becomes a burden:
- Prioritize fixing broken tests over writing new ones: When tests break, the common reaction from teams is, "Oh, I'll fix that next week when I have some time." But then next week becomes next month, which becomes three months, and then six months. Meanwhile, more flaky tests are popping up. Eventually, it'll get to a point where the entire test suite becomes a maintenance nightmare. The longer you ignore a broken test, the harder it becomes to fix. Fixing a flaky test introduced recently is significantly easier than fixing the same test six months later. You'll likely have forgotten the details of the test case and will need to spend hours regaining that context.
- Share the responsibility of tests between the developer and the QA or test engineers: Clearly define the owner for each test and who should follow up with them about the fixes.
- Delete tests that are no longer useful: Once you notice a test no longer provides value, delete it from the codebase. This keeps your test suite easier to maintain.
3. Writing poor-quality tests
Another mistake teams make with automated testing is failing to follow best practices and patterns when writing test scripts, which leads to maintenance issues later on. Following best practices is just as crucial for test code as for production code: the person who writes a test won't always be around to fix it when it breaks or needs to change, and well-structured tests are far easier for someone else to maintain.
Here are some best practices to keep in mind as you write test scripts (a combined sketch follows the list):
- Each test should have a single purpose: Tests should be 'atomic': extremely small and testing only one thing. That way, anybody can look at a test and quickly understand what it validates with very little contextual knowledge.
- Each test should have a unique descriptive name: Giving your tests descriptive names helps in debugging. Whenever a particular test fails, you'll know just from the name of the test what rule that test was checking.
- Each test should be independent: Having tests that depend on each other or are chained together means that if one test fails, the others fail too. Dependent tests are also more likely to be flaky or fragile. While eliminating every dependency may not be fully possible, independence should be the default aim.
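The following minimal JUnit 4 sketch in Kotlin (the `Cart` class is hypothetical, for illustration only) shows all three practices together: each test validates a single rule, its name states that rule, and a fresh fixture per test keeps them independent:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Assert.assertTrue
import org.junit.Before
import org.junit.Test

// Hypothetical model, for illustration only.
class Cart {
    private val items = mutableListOf<Pair<String, Int>>() // item name to price
    fun add(name: String, price: Int) { items.add(name to price) }
    fun total(): Int = items.sumOf { it.second }
    fun isEmpty(): Boolean = items.isEmpty()
}

class CartTest {
    private lateinit var cart: Cart

    // A fresh Cart per test keeps every test independent of the others.
    @Before
    fun setUp() {
        cart = Cart()
    }

    // Atomic: checks exactly one rule, named after that rule.
    @Test
    fun newCart_isEmpty() {
        assertTrue(cart.isEmpty())
    }

    @Test
    fun total_sumsPricesOfAddedItems() {
        cart.add("case", 15)
        cart.add("charger", 25)
        assertEquals(40, cart.total())
    }
}
```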
4. Relying exclusively on device, emulator, or simulator testing
Mobile device testing can be done with emulators, simulators, or real devices. The problem is that teams sometimes rely too heavily on one of these to the exclusion of the others. By relying on a single solution, you may miss key information about performance, quality, and user experience.
Emulators and simulators are valuable for fast mobile testing when the devices you need aren't available. However, they don't encounter the same environmental factors as an actual device, so they aren't always as accurate.
For example, simulators cannot provide information on battery usage or cellular interrupts because they don't imitate the underlying hardware. Emulators, on the other hand, won't tell you how your mobile app performs in terms of bandwidth and connectivity. And over-relying on physical devices is costly and doesn't accurately predict app performance as load increases.
You need a healthy mix of real device, emulation, and simulation strategies to achieve optimal results:
- Use simulators during the initial coding phases since they offer better debugging facilities.
- Use an emulator with a load testing tool to generate loads for stress testing your app.
- Use emulators for functional and regression testing.
- Use actual devices on real networks to test your app's mobile experience.
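One way to keep a single suite usable across that whole mix is to skip hardware-dependent checks when a test detects it's running on an emulator. A minimal sketch, assuming JUnit 4 on Android; the fingerprint heuristic is illustrative, not exhaustive:

```kotlin
import android.os.Build
import org.junit.Assume.assumeFalse
import org.junit.Test

class BatteryDrainTest {
    // Rough heuristic: generic build fingerprints usually indicate an emulator.
    private fun runningOnEmulator(): Boolean =
        Build.FINGERPRINT.startsWith("generic") ||
            Build.MODEL.contains("Emulator", ignoreCase = true)

    @Test
    fun batteryDrain_staysWithinBudget() {
        // Emulators don't model real battery hardware, so this check is
        // meaningful only on physical devices; assumeFalse() skips it elsewhere.
        assumeFalse(runningOnEmulator())
        // ...measure battery drain against a budget on the physical device...
    }
}
```

Tests skipped this way run automatically in the real-device stage of your pipeline, so you don't need to maintain a separate suite per environment.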
Listen to our podcast with test automation expert Angie Jones for more tips on getting started.
Avoid mobile app testing mistakes with Bitrise
The tools you choose can either make or break your testing efforts. Your CI/CD tool is particularly important because it’s what facilitates continuous automated testing. Since you're building for mobile, a CI/CD tool built specifically for mobile is preferable. That's where Bitrise comes in.
Bitrise offers seamless integration with most of the popular test automation tools and frameworks for mobile, such as Perfecto, TestProject, and Applitools. It also features over 330 Steps critical to mobile development workflows to help you move faster and be more productive.
Request an ROI demo today to learn how Bitrise can help you optimize your mobile app testing strategy.
You can read more about testing mobile apps in:
- 5 tips and tricks for mobile testing
- Solving flaky tests by making use of Xcode on virtual machines
- The 4 main factors to consider as part of your test automation strategy
- AI and Machine Learning: how are they changing the mobile testing landscape?
- Snapshot testing in iOS: testing the UI and beyond
- Testing on the CI
- Getting started with testing Jetpack Compose
- What is unit testing in mobile development
- Codeless automation testing tools for mobile: An overview and 10 tools to get you started