In the previous article, I showed how to set up testing on the CI and what you can do to make your life easier by extracting the information required to debug tests when something goes wrong. Now I will dig deeper into the latter topic and focus on emulator and UI test-related issues.
Mitigation strategies
When I claim that instrumented tests (especially UI tests) are flaky, I'm pretty sure everyone can recall an event from their own experience when it happened. Or if not, you are probably really lucky, but at least you have heard that someone knows a person who experienced it. Sadly, there is no silver bullet to prevent flakiness, but as with everything, there are quite a few things we can do to prevent most of it. So here are the three groups of strategies:
- Preventive
- Reactive
- Supportive
Let’s see each.
Preventive test flakiness mitigation
As the name suggests, these strategies involve all the things that you can do before launching your tests, and they enable you to prevent the flaky behaviour from happening. This sounds rather vague, but I will try to give more examples and details now.
Page Object Pattern
Are you familiar with the term “coding patterns”? Do you know and use some of them? If you answered yes to both questions, then I guess you won’t be surprised to hear that testing also has its patterns, and the Page Object Pattern is one of them.
This pattern comes in handy when you are doing UI tests, for example with Espresso or UiAutomator. Originally, this pattern was used by web UI testers, which is where the name “Page Object” comes from. From the name you might assume that you can create Page Objects only for pages or screens, but in fact they can be used for smaller items as well (e.g. fragments or different views). The goal of the Page Object Pattern is to introduce a decoupling layer between the test cases and the code required to perform an action or access an element on the UI. This layer will be the Page Object. For the visual thinkers, here is an overview image:

As you can see:
- You can run multiple test classes on a single device
- A given test class can use one or more UI components (e.g. Activity, Fragment)
- Each UI component will have one page object
- A test case can use one or more page objects
Putting this in practice:
Let’s imagine you have UI tests for a simple TODO application, which lists TODO items that you can add, edit and delete. From a testing perspective you do not really care about the resource names of the given buttons and views, because it is highly likely that you are testing behaviours instead. So if you put all of this code in the page object, test cases contain less boilerplate and become shorter. Also, if some behaviour in the UI changes (for example, previously each item had a delete button, and now items have to be swiped away to delete them), you would otherwise need to update every test case. If you put this logic in the page objects instead, you only have to update it in one place. To stick with this example, there could be a method like “deleteItem” in the page object. So to summarise, using the Page Object Pattern gives you the following advantages:
- Avoid code duplication, as all accessors, actions, etc. can live in a single place
- More readable code
- You only have to change the code in a single place in case of a behaviour change
- You can combine it with other patterns/techniques; for example, it becomes more compact if you combine it with a fluent API
Now, example time:
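Here is a minimal sketch of what such a test case can look like, assuming UiAutomator 2 and AndroidX Test. IndexActivity, IndexActivityScreen and launchUiTests() come from this article, but the assertion method names, the chained screen objects and the imports of the app classes are illustrative assumptions rather than the exact production code:

```kotlin
import androidx.test.core.app.ActivityScenario
import androidx.test.ext.junit.runners.AndroidJUnit4
import androidx.test.platform.app.InstrumentationRegistry
import androidx.test.uiautomator.UiDevice
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class IndexActivityUiTest {

    private val uiDevice: UiDevice =
        UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())

    @Test
    fun childFragmentIsShownAfterLaunchingUiTests() {
        // Start the application with the IndexActivity
        ActivityScenario.launch(IndexActivity::class.java).use {
            // Each interaction returns the next screen object,
            // so the whole flow reads as one fluent chain
            IndexActivityScreen(uiDevice)
                .launchUiTests()               // opens MainActivity
                .assertParentFragmentShown()   // MainActivity displays ParentFragment
                .openChildFragment()
                .assertChildFragmentShown()    // ParentFragment displays ChildFragment
        }
    }
}
```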
Note: as you see, I use the word “screen” in the naming instead of “page”. The reason is that it is more descriptive for mobile user interfaces; “page” is more understandable in webpage or desktop terms.
The above example shows a simple UiAutomator test case, where we do the following:
- Start the application with the IndexActivity
- Launch MainActivity from IndexActivityScreen with the launchUiTests() method
- MainActivity displays a fragment (ParentFragment)
- ParentFragment displays another fragment (ChildFragment)
Thanks to the page object pattern, the code for this is quite short and readable. As you can see, I combined the page object pattern with a fluent API, to make it more concise.
Looks nice, but how will this help in preventing test flakiness?

You have probably experienced flaky UI tests in the past because someone on your team forgot to wait for a given view element to appear. It happens from time to time, because we are humans and we make mistakes and forget things. The good thing is that if you use the page object pattern, you write the waiting logic once and reuse it in the different test cases, because it lives in your screen object. The fewer times you have to add the waiting logic, the less chance there is that you will forget it. Also, readable code helps to avoid human errors, right?
You might argue that the page object pattern is not the only thing that would solve or mitigate the above-mentioned issues, and you are right. My point here is that it is a great choice for mitigating them, because it does so while bringing its other benefits along.
I will provide an example for UiAutomator. This is how my screen class looks for the IndexActivity:
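A sketch of what such a screen class could look like; the resource ids, the application package and the MainActivityScreen it returns are assumptions, and the find/click/waitFor helpers come from the parent class shown a bit further below:

```kotlin
import androidx.test.uiautomator.By
import androidx.test.uiautomator.UiDevice
import androidx.test.uiautomator.UiObject2

class IndexActivityScreen(uiDevice: UiDevice) : BaseScreen(uiDevice) {

    // The UI elements we interact with, as members (the resource id is an assumption)
    private val launchUiTestsButton: UiObject2
        get() = find(By.res(APP_PACKAGE, "button_launch_ui_tests"))

    // The available interactions as member methods; returning the next screen object
    // keeps the fluent API going in the test cases
    fun launchUiTests(): MainActivityScreen {
        click(launchUiTestsButton)
        return MainActivityScreen(uiDevice)
    }

    // Makes sure all the required elements are shown before we interact with them
    override fun waitTillLoad() {
        waitFor(By.res(APP_PACKAGE, "button_launch_ui_tests"))
    }

    private companion object {
        const val APP_PACKAGE = "com.example.todo"
    }
}
```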
As you see, we have:
- The UI elements we interact with as a member variable
- The available interactions as member methods
- And a method named waitTillLoad(), which makes sure all the required elements are shown on the screen before we start interacting with them
And just to have the full picture, have a look at the parent class for this IndexActivityScreen class:
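A possible shape for that parent class, under the same assumptions as above (names, timeout values and the exact helper set are illustrative):

```kotlin
import androidx.test.uiautomator.BySelector
import androidx.test.uiautomator.UiDevice
import androidx.test.uiautomator.UiObject2
import androidx.test.uiautomator.Until

abstract class BaseScreen(protected val uiDevice: UiDevice) {

    init {
        // The parent constructor calls waitTillLoad(), so every screen object waits for
        // its own elements as soon as it is created. Because this runs during construction,
        // waitTillLoad() should not rely on the subclass' own stored properties.
        waitTillLoad()
    }

    // Every non-abstract subclass has to define what "loaded" means for itself
    protected abstract fun waitTillLoad()

    // Interactions like find/click/wait are written once, here
    protected fun find(selector: BySelector): UiObject2 =
        uiDevice.wait(Until.findObject(selector), DEFAULT_TIMEOUT_MS)
            ?: throw AssertionError("Element not found: $selector")

    protected fun click(element: UiObject2) {
        element.click()
    }

    protected fun waitFor(selector: BySelector) {
        if (uiDevice.wait(Until.hasObject(selector), DEFAULT_TIMEOUT_MS) != true) {
            throw AssertionError("Timed out waiting for: $selector")
        }
    }

    companion object {
        // Timeouts are unified and defined in a single place
        const val DEFAULT_TIMEOUT_MS = 10_000L
    }
}
```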
Things to see from the above code:
- Interactions like click/find only have to be written once
- The constructor of the parent class calls the waitTillLoad() method, so every subclass does the same simply by calling super(uiDevice) from its constructor
- waitTillLoad() is abstract, so every non-abstract subclass has to define it, which makes it less likely that you will forget it
- Timeouts and retries are unified and defined in one place
Takeaway
Write your tests in a readable, compact and expressive manner; it will save you a lot of headaches. The Page Object Pattern is a good candidate for this.
Reactive test flakiness mitigation
Continuing the discussion on UI test flakiness mitigation: reactive strategies involve all the things you can do while your tests are running, reacting to unwanted events so that they do not turn into flaky failures.
Dealing with System events
System events can happen during your test runs; one of them is the Application Not Responding (ANR for short) dialog. I see it quite often when I fire up an emulator with API level 30.

Has it ever happened that your UI test failed because Android threw a system dialog? If the answer is yes, I bet you felt similar to the image below.

Even if your app is super fast and responsive, such dialogs can still appear on the CI, because as we learned in previous posts, the performance of the CI machines is limited. Obviously, you have to do something with those dialogs.
If there are no other requirements, in most cases you can close them, either by clicking the “Wait” or the “Close app” button. It is usually better to go with the first one because, as I said, the app that is not responding can even be your own, and you do not want it closed in the middle of a test.
So what would be the solution? Create a watcher that will watch for these dialogs during your test runs. Sounds reasonable. I have good and bad news. Let's start with the bad: there is no such thing in Espresso. The good news is that there is a watcher in UiAutomator, and this will help you even if you have Espresso tests.
Here is an example of how to do it. As I said, we will need a watcher for this, and UiAutomator has UiWatcher. The steps to achieve this (a complete sketch follows the list):
- Create a method for registering the watcher
The dialog will show something like “<application> isn’t responding”, so we should search for that text. Just to be sure we are not picking up something from a different application that happens to contain the same text, filter the package to “android”.
- Check if there is an ANR dialog
Simple step, not much to explain.
- Click on the “Wait” button when it appears
As you see, in case we have an ANR dialog, we click on the “Wait” button to make it disappear. You can do additional things too; for example, I log which application had the ANR, which can be interesting information when I check the logs.
- Register the watcher
BeforeClass annotated methods are a perfect spot for it.
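Putting the steps together, a sketch could look like the following; the watcher and class names, the log tag and the exact dialog texts are assumptions and may differ between API levels:

```kotlin
import android.util.Log
import androidx.test.platform.app.InstrumentationRegistry
import androidx.test.uiautomator.By
import androidx.test.uiautomator.UiDevice
import org.junit.BeforeClass

object AnrWatcher {

    private const val TAG = "AnrWatcher"

    // 1) A method for registering the watcher
    fun register(uiDevice: UiDevice) {
        uiDevice.registerWatcher("AnrWatcher") {
            // 2) Check if there is an ANR dialog: it shows "<application> isn't responding",
            //    and we filter the package to "android" to avoid matching other apps
            val anrDialog = uiDevice.findObject(
                By.pkg("android").textContains("isn't responding")
            )
            if (anrDialog != null) {
                // Logging which application had the ANR can be useful later
                Log.w(TAG, "ANR dialog detected: ${anrDialog.text}")
                // 3) Click on the "Wait" button so the app under test is not killed
                uiDevice.findObject(By.pkg("android").text("Wait"))?.click()
                true
            } else {
                false
            }
        }
        uiDevice.runWatchers()
    }
}

// 4) Register the watcher, for example from a @BeforeClass annotated method
// in an abstract parent test class that your test classes extend
abstract class BaseUiTest {
    companion object {
        @JvmStatic
        @BeforeClass
        fun registerWatchers() {
            AnrWatcher.register(
                UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())
            )
        }
    }
}
```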
This will save you a lot of headaches. And as I promised, I will tell you what to do when you have Espresso tests. I am certain there are other approaches as well, but nothing prevents you from using this very same code: you just have to add a dependency on UiAutomator and include this code alongside your Espresso tests. A good approach is to create an abstract parent test class that contains it, and let your test classes extend that class. This way you have no code duplication and the actual test classes do not have to contain this code, so they look nearly the same as before; it just saves you from those nasty ANR dialogs. Based on this, you can write your own watchers for other cases as well, hurray!
Takeaway
Close those ANR dialogs and other unnecessary stuff during your test runs with UiWatchers.
Supportive test flakiness mitigation
The last group of UI test flakiness mitigation: supportive strategies involve all the things that you can do to help reduce the flakiness of your tests, but they do not prevent the flaky behaviour from happening.
Better hardware
One trivial thing you can do if you have flaky tests because of timeouts is to buy or rent faster hardware, which you may or may not be able (or willing) to do. Of course this will help, but as I said, it will not prevent flakiness, because the amount of performance the tests can require has no upper limit (imagine having the latest and fastest machine and launching 10 or more emulators simultaneously; I wouldn’t place high bets on not having some performance issues).

Screenshotting UI test events
In some cases it can be a big help to have a screenshot of the device when a given UI test case fails, so you can better understand what caused it. For example, taking screenshots helped me discover that ANR dialogs sometimes appear during my UI test runs, causing intermittent test failures. And of course, this also comes in handy for non-intermittent failures.

For taking screenshots, you have to create a TestWatcher and implement the actions for the given events you want. For the complete set of events that a TestWatcher has, please check its documentation. I will show you how to take a screenshot when a test is about to start and when it has failed (a combined sketch follows the steps):
- (Optional): create a test rule for getting the name of the currently active test case.
This will come in handy when creating those screenshot files, and later it will be easier for you to match screenshots with test cases. An easy way to do this is to use TestName.
- Create your TestWatcher
Pro tip: some logs in the logcat output will also be helpful. The only thing that requires explanation is the TestEvent enum. It is an inner enum class that I created, and I use it to indicate which test event triggered the screenshot. You can use a different class for this purpose if you would rather not create your own.
- Add a method for creating the screenshots
As you see, you can do it with the Screenshot API or with the UiAutomation.takeScreenshot() method. I leave this choice to the reader.
- Add method for creating a good name for the screenshot
It is rather subjective what suits a developer’s taste; in my example I concatenate the timestamp with the name of the test case and the given test event (e.g. 20210520_120000_myUiTest_FAILURE).
- The last, and probably biggest step we need is to save it to the device
As you can see, I added some logs here too, and I use the good old ContentResolver in the process of storing the screenshots. Due to the storage changes introduced in API level 29, we need a different approach from that level on. Here you can check my example:
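Here is a combined sketch of the pieces above; the TestEvent enum, the file-name format and the Pictures/UiTestScreenshots folder follow the descriptions in this article, while the class name and the rest of the implementation details are assumptions:

```kotlin
import android.content.ContentValues
import android.graphics.Bitmap
import android.os.Build
import android.os.Environment
import android.provider.MediaStore
import android.util.Log
import androidx.test.platform.app.InstrumentationRegistry
import org.junit.rules.TestName
import org.junit.rules.TestWatcher
import org.junit.runner.Description
import java.io.File
import java.io.FileOutputStream
import java.text.SimpleDateFormat
import java.util.Date
import java.util.Locale

class ScreenshotTestWatcher(private val testName: TestName) : TestWatcher() {

    // Indicates which test event triggered the screenshot
    enum class TestEvent { STARTING, FAILURE }

    override fun starting(description: Description) {
        takeAndSaveScreenshot(TestEvent.STARTING)
    }

    override fun failed(e: Throwable, description: Description) {
        takeAndSaveScreenshot(TestEvent.FAILURE)
    }

    // Take the screenshot (the UiAutomation.takeScreenshot() variant)
    private fun takeAndSaveScreenshot(event: TestEvent) {
        Log.i(TAG, "Taking screenshot for event: $event")
        val bitmap = InstrumentationRegistry.getInstrumentation().uiAutomation.takeScreenshot()
        saveScreenshot(bitmap, "${fileBaseName(event)}.png")
    }

    // Build a good name, e.g. 20210520_120000_myUiTest_FAILURE
    private fun fileBaseName(event: TestEvent): String {
        val timestamp = SimpleDateFormat("yyyyMMdd_HHmmss", Locale.US).format(Date())
        return "${timestamp}_${testName.methodName}_$event"
    }

    // Save the screenshot to the device, under Pictures/UiTestScreenshots
    private fun saveScreenshot(bitmap: Bitmap, fileName: String) {
        val context = InstrumentationRegistry.getInstrumentation().targetContext
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
            // From API level 29 we go through the ContentResolver / MediaStore
            val values = ContentValues().apply {
                put(MediaStore.Images.Media.DISPLAY_NAME, fileName)
                put(MediaStore.Images.Media.MIME_TYPE, "image/png")
                put(MediaStore.Images.Media.RELATIVE_PATH, "Pictures/UiTestScreenshots")
            }
            val uri = context.contentResolver.insert(
                MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values
            ) ?: return
            context.contentResolver.openOutputStream(uri)?.use { stream ->
                bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream)
            }
        } else {
            // Below API level 29 we can write straight into the public Pictures folder
            // (requires the write permission, see GrantPermissionRule below)
            val dir = File(
                Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES),
                "UiTestScreenshots"
            ).apply { mkdirs() }
            FileOutputStream(File(dir, fileName)).use { stream ->
                bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream)
            }
        }
        Log.i(TAG, "Screenshot saved: $fileName")
    }

    companion object {
        private const val TAG = "ScreenshotTestWatcher"
    }
}
```

In a test class you would then declare both rules, e.g. `@get:Rule val testName = TestName()` followed by `@get:Rule val screenshotWatcher = ScreenshotTestWatcher(testName)`.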
I will not go into deep detail, as it is a different topic and my article would never end, but maybe I will update this article with a link if I decide to write about data storage on Android in the future. The only thing you need to know is that this will store the screenshots under the sdcard/Pictures/UiTestScreenshots/ directory.
Just do not forget to grant the write permission for your app below API level 29. The easiest way is to use GrantPermissionRule.
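A minimal sketch of that rule inside a test class (the permission only matters below API level 29):

```kotlin
import android.Manifest
import androidx.test.rule.GrantPermissionRule
import org.junit.Rule

class MyUiTest {
    // Grants WRITE_EXTERNAL_STORAGE so the artifacts can be written below API level 29
    @get:Rule
    val writePermissionRule: GrantPermissionRule =
        GrantPermissionRule.grant(Manifest.permission.WRITE_EXTERNAL_STORAGE)

    // ... test cases ...
}
```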
Takeaway
Create screenshots at different test events; it can help you discover what caused the failure.
Dump view hierarchy
Creating screenshots is extremely helpful in some cases, but we would not be developers if we did not want details closer to the code. There can be similar-looking views, especially during transitions, so resource names and IDs can mean the world when we debug a test failure. As the title says, we can also dump the view hierarchy to a file. Here is how you can do it.
- Create your TestWatcher (see Screenshotting Ui test events)
- Create a method for dumping the view hierarchy and storing it on the device
Please note that for Espresso you can use TreeIterables to create the view hierarchy. Similarly to taking screenshots, we have to use a different approach for storing the files from API level 29.
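As a sketch, such a dump method added to the same TestWatcher as in the screenshot section could look like this; dumpWindowHierarchy() is UiAutomator's API, the folder name follows the article, and fileBaseName() is the naming helper from the screenshot sketch:

```kotlin
// Inside the same TestWatcher as the screenshot code above
// (needs one extra import: androidx.test.uiautomator.UiDevice)
private fun dumpViewHierarchy(event: TestEvent) {
    val fileName = "${fileBaseName(event)}.xml"
    val uiDevice = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())
    val context = InstrumentationRegistry.getInstrumentation().targetContext

    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
        // From API level 29 we store the file through MediaStore.Downloads
        val values = ContentValues().apply {
            put(MediaStore.Downloads.DISPLAY_NAME, fileName)
            put(MediaStore.Downloads.RELATIVE_PATH, "Download/UiTestHierarchy")
        }
        val uri = context.contentResolver.insert(
            MediaStore.Downloads.EXTERNAL_CONTENT_URI, values
        ) ?: return
        context.contentResolver.openOutputStream(uri)?.use { stream ->
            uiDevice.dumpWindowHierarchy(stream)
        }
    } else {
        // Below API level 29 we can write straight into the public Download folder
        val dir = File(
            Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS),
            "UiTestHierarchy"
        ).apply { mkdirs() }
        uiDevice.dumpWindowHierarchy(File(dir, fileName))
    }
    Log.i(TAG, "View hierarchy saved: $fileName")
}
```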
- As seen in the code above, these methods will store the view hierarchy files under the /sdcard/Download/UiTestHierarchy/ folder. Please note that some helper functions were already introduced in the Screenshotting UI test events section; you can find them there.
Takeaway
You can create view hierarchy dumps in different stages of your UI tests, which can help you in debugging test failures.
Pulling the saved data from the device
When you work on your local machine, you can easily view the collected test data by using the Device File Explorer in Android Studio. Just search for the given file and open it with a double click, or you can even download it to your machine: right-click on it and choose “Save As”.

It is much trickier on the CI, because the CI might not store the data of the given virtual device; for example, this is the case on Bitrise. We have to pull those files from the device before the build finishes and upload them somewhere. Luckily, with a simple Script step we can do it easily; just add the following code to your bitrise.yml at the point where the collected data is ready:
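Something along these lines should do it; this is a sketch of a Script step, and the folder names (the ones used earlier in this article) and the build/reports destination are assumptions you may need to adjust:

```yaml
- script@1:
    title: Pull UI test artifacts from the device
    inputs:
    - content: |-
        #!/usr/bin/env bash
        set -ex
        # Print the list of files for debug purposes
        adb shell ls -R /sdcard/Pictures/UiTestScreenshots/ || true
        adb shell ls -R /sdcard/Download/UiTestHierarchy/ || true
        # Pull them into the build/reports directory of the app
        mkdir -p app/build/reports
        adb pull /sdcard/Pictures/UiTestScreenshots/ app/build/reports/ || true
        adb pull /sdcard/Download/UiTestHierarchy/ app/build/reports/ || true
```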
As you see, I am printing out the list of files for debug purposes before pulling them into the build/reports directory of the given application. Now we just have to upload them somewhere where we can find them. As I have shown previously in the Testing in the CI article, you can upload them with the Deploy to Bitrise.io step.
Takeaway
Pull the collected data with the adb pull command and store it somewhere.
Check your device’s health
Sometimes virtual device launches result in a failure. Devices can even die during your build run, and that can leave you clueless about why your test run ended in failure. A simple and helpful trick is to check the health of your devices with the adb devices command. For example, if you see “offline” in the result, you know that your device was not responding or was not connected at that time. Please see the documentation for details.
In Bitrise, just add a Script step to do the device health check.
Note: I prefer also listing the installed SdkManager packages; it helps when a strange issue happens and it turns out to be a known issue on Google’s side.
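A small sketch of such a Script step; the sdkmanager path is an assumption and depends on how the command line tools are installed on the stack:

```yaml
- script@1:
    title: Device health check
    inputs:
    - content: |-
        #!/usr/bin/env bash
        set -ex
        # List the connected devices and their state (e.g. "device", "offline")
        adb devices
        # List the installed SdkManager packages as well
        "$ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager" --list_installed
```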
Summary
You can do things before, during and after running the tests to help mitigate flakiness issues. I hope you liked my article; let me know your thoughts and questions.