Manual testing in mobile app development: is it still relevant?

Humans love automation. Anything that can make our lives a little bit easier is always a welcome idea. But sooner or later, some of us start to overestimate its capabilities. Just like the recurring conversations on robots taking our jobs and AI making software developers obsolete, there's also been much talk about how automated testing will completely replace manual testing. Why bother with manual tests every time when you can just write automated tests once? After all, automated tests are faster and produce even better results...right?

Wrong! Manual testing isn’t going to fade away. Yes, automated tests save time by executing a large number of checks quickly. Yet certain tests are still best done manually, as automating them is difficult and rarely cost-effective. The discussion, therefore, shouldn't be about automated testing vs. manual testing. Instead, it should be about how to utilize and balance both in a way suited to your organization's size and development cycle demands to maximize return on investment.

Manual testing vs. automated testing: the basics

Manual testing entails a human using the mobile app, either on a physical device or a simulator, to see how it responds to different situations and whether it meets certain requirements. Manual tests are best suited to surfacing user interface (UI) and user experience (UX) problems. This is because humans possess better analytical and creative abilities and can adopt the mindset of the end user to tell when something is “off.” It’s still the most common, and often the first, type of test performed across teams because it’s easy to do and doesn't require any special knowledge or tools.

Automated testing, on the other hand, means running pre-scripted tests, written with the aid of testing frameworks like Appium, Espresso, or XCUITest, to validate the app against expected results. In many teams, these automated tests are integrated into the CI/CD pipeline, so when a developer pushes a commit, the tests first verify that everything still works before a pull request (PR) can be merged. Automated tests are initially time-consuming to write, but afterward you can execute them continuously and in parallel, which increases the return on investment (ROI).
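As a rough sketch, a CI configuration that gates merges on automated tests might look something like this simplified bitrise.yml-style fragment (the workflow and step names are illustrative, not copied from a real setup):

```yaml
# Simplified, illustrative CI workflow: every pushed commit runs the
# automated test suite before a pull request can be merged.
# Step names here are examples, not an exact configuration.
workflows:
  run-tests:
    steps:
    - git-clone: {}          # check out the pushed commit
    - android-unit-test: {}  # run the automated test suite
    - android-build: {}      # confirm the app still builds
```

If any step fails, the pipeline blocks the merge, which is what makes the PR gate useful.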

Where manual testing shines

As mentioned, manual testing offers the best results when you need to test the look and feel (otherwise known as user experience or UX) of an app. This kind of UX testing answers questions like: Is a button big enough? Is there enough whitespace? Is the pagination transition animation smooth enough?

Arguably, these kinds of tests can still be automated. However, writing automated test scripts that correspond to the behavior of a human tester might take weeks and still won’t produce the kind of comprehensive, qualitative feedback a human will. For example, during manual testing, a human can tell if the app navigation is intuitive or if the signup process is overwhelming. These are things an automated test cannot reveal.

Humans possess unique insights that computers — no matter how intelligent — cannot offer (at least, not yet). So, if you’re testing for something that requires qualitative, human judgment, you need humans in the picture. For this reason, manual testing is better suited to tests like:

  • Usability tests: Usability tests analyze how user-friendly a mobile application is: whether it’s easy to understand, accessible, easy to navigate, and how it reacts to errors. Usability tests should be done by real users to get real-world feedback, which can then be used to improve the user experience.
  • Exploratory tests: Exploratory tests are usually carried out on fairly new features to investigate how the feature reacts to unexpected or inconvenient circumstances, like losing and regaining a Wi-Fi connection, simultaneously running other apps, or changing device permissions. Automated testing can’t uncover errors it wasn’t programmed to find. It’s from tests like these that testers come up with new test cases that can be automated for similar scenarios in the future.

Where manual testing stalls

As intuitive as humans may be compared to computers, we’re not so good at tasks that require repetitive or routine efforts — for example, carrying out regression tests that confirm that a commit (bug fix or code change) did not negatively affect existing features. The human error factor will creep in.
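To make that concrete, here is a minimal sketch (in Python, with a hypothetical function and bug) of the kind of regression test worth automating: once a bug is fixed, the test pins the behavior down so a later commit can’t silently reintroduce it.

```python
def format_price(amount_cents):
    """Format a price in cents as a dollar string.

    A hypothetical bug once dropped the leading zero for amounts
    under one dollar (e.g. 5 cents -> "$.05"), so the regression
    test below locks in the corrected behavior.
    """
    dollars, cents = divmod(amount_cents, 100)
    return f"${dollars}.{cents:02d}"


def test_format_price_regression():
    # The exact case that was once broken stays covered forever.
    assert format_price(5) == "$0.05"
    # A couple of ordinary cases for good measure.
    assert format_price(1999) == "$19.99"
    assert format_price(100) == "$1.00"


test_format_price_regression()
```

Running this on every commit costs nothing; asking a human to re-check the same price formats on every commit invites exactly the human-error factor described above.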

Manual testing is also not great for tests that deal with underlying systems and infrastructure, like validating API calls and structure. After all, systems will best understand other systems. If a test is straightforward, repetitive, doesn't require special human attention, or is a common workflow like logging in, then it should be automated to save time.
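As a sketch of what “systems checking systems” looks like in practice, the snippet below (plain Python, with a made-up response shape rather than a real API contract) validates that an API response contains the fields and types a client expects:

```python
# Expected shape of a (hypothetical) user API response.
REQUIRED_FIELDS = {
    "id": int,
    "email": str,
    "is_active": bool,
}


def validate_user_response(payload):
    """Return a list of problems found in an API response payload.

    An empty list means the payload matches the expected structure.
    """
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(
                f"wrong type for {field}: {type(payload[field]).__name__}"
            )
    return problems


# A well-formed payload passes; a malformed one is caught automatically.
assert validate_user_response(
    {"id": 1, "email": "a@b.co", "is_active": True}
) == []
assert validate_user_response({"id": "1", "email": "a@b.co"}) == [
    "wrong type for id: str",
    "missing field: is_active",
]
```

A check like this runs in milliseconds on every commit; a human eyeballing raw JSON would be slower and far less reliable.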

Examples are:

  • Performance tests: Performance tests measure the speed of an app in terms of metrics like ‘time to page load,’ ‘time to first render,’ or ‘search results response time.’
  • Compatibility tests: Compatibility testing ensures that your mobile app works as intended across all the mobile devices and OS versions you aim to support.
  • Functional tests: These tests check the validity of functions or methods in the codebase, such as unit or integration tests. They assert that a given input to a function produces the expected output or behavior. This is especially useful for code that performs sensitive calculations or interfaces with third-party tools.
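As a minimal illustration of the last category, a functional test simply asserts that a given input produces the expected output. The sketch below uses plain Python with hypothetical discount logic (the rounding rule is an illustrative assumption, not a real business requirement):

```python
def apply_discount(total, percent):
    """Return the total after applying a percentage discount.

    Sensitive money math like this is a prime candidate for
    automated functional tests.
    """
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)


# Each assertion pins one input to one expected output.
assert apply_discount(100.0, 25) == 75.0
assert apply_discount(19.99, 0) == 19.99
assert apply_discount(10.0, 100) == 0.0
```

Because the expected outputs are fixed, these checks can run unattended on every commit, which is exactly where automation beats a human tester.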

Finding a balance: How much should you rely on manual testing?

One of the major factors that determines how much a team may rely on manual testing is stability. For example, if you’re in the early stages of building a minimum viable product (MVP) or an experimental feature, it may make sense to hold off on writing automated tests until the product or feature has a more concrete direction. At that stage, workflows are likely to change constantly based on feedback from sprints and tests, and every change to a workflow warrants a rewrite of the corresponding automated test. This can easily become frustrating and may slow work down.

However, as the feature or prototype grows and gains some stability, relying on manual testing for a large part of the testing strategy becomes unrealistic. At this stage, there is more ground to cover, and repeatedly testing every workflow by hand is both error-prone and slow.

In general, when deciding how to approach automated and manual testing, our suggestion is to manually test the hard-to-automate cases that remain constant as the app grows in complexity, and automate everything else. Otherwise, your manual testing effort grows along with your app, and each release becomes progressively harder to ship, which is something you want to avoid.

Larger teams should also go a step further and consider automating UI tests to save more time. For example, if you have a complex app with, say, hundreds of screens and need to support hundreds of device and operating system combinations, automated UI tests will pay off quickly. Though initially time-consuming to set up, they’re worth it in the long run and significantly cut down on testing time.
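The scale argument can be sketched as a simple device/OS matrix: even a handful of devices, OS versions, and screens multiplies into dozens of test runs, which is exactly the kind of load automation absorbs. (The device names and the check below are placeholders, not a real test harness.)

```python
from itertools import product

DEVICES = ["Pixel 7", "Galaxy S23", "Moto G"]        # placeholder device list
OS_VERSIONS = ["12", "13", "14"]                      # placeholder OS versions
SCREENS = ["login", "home", "checkout", "settings"]   # placeholder screens


def run_ui_check(device, os_version, screen):
    """Stand-in for launching a UI test on one device/OS/screen combo."""
    return f"PASS {screen} on {device} (Android {os_version})"


results = [run_ui_check(d, v, s)
           for d, v, s in product(DEVICES, OS_VERSIONS, SCREENS)]

# 3 devices x 3 OS versions x 4 screens = 36 runs -- already too many
# to repeat by hand on every commit.
assert len(results) == 36
```

Real device matrices are far larger than this toy one, which is why the cloud device farms below exist.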

You can use cloud device farms like AWS Device Farm or Firebase Test Lab, or an automation testing platform like Perfecto or pCloudy. These tools let you test your app across a wide range of devices and take screen recordings or snapshots that a human can review later to ensure visual consistency. The good news is that if you’re using Bitrise as your CI/CD tool, we have out-of-the-box Steps for all these tools, many of which are Verified Steps created by our partners.

Manual testing isn’t going anywhere

Automation is very important in mobile app testing, but it is simply one component of the solution. In the fast-changing code environment that mobile development projects are accustomed to, test automation should be balanced with the flexibility provided by manual testing.

Automated testing is great for continuous, repetitive checks, but on its own it’s not enough. Combined with manual testing, the two provide the highest chance of achieving full product coverage.

Further Reading:

We’ve written a few insightful pieces that may be of help to you as you start to consider automated tests.

