Guest post by Antoine van der Lee, Lead iOS Engineer at WeTransfer.
As a Lead iOS Engineer at WeTransfer, Antoine focuses on code architecture and team processes. He's passionate about contributing to the iOS community, where you might know him from the weekly posts on his personal blog, SwiftLee. He particularly enjoys speaking about structuring code architecture in a way that creates sustainability, open sourcing frameworks, and how iOS developers can be more successful in their work.
At WeTransfer we reuse a single primary workflow (read more about it here), which we've optimized heavily as it affects all inheriting workflows.
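As a rough sketch of what that reuse looks like in bitrise.yml (the linked post covers our actual setup, and the workflow names below are just examples), inheriting workflows pull in the shared steps of the primary workflow through before_run:

workflows:
  primary:
    steps:
    - script:
        title: Shared setup
        inputs:
        - content: |-
            #!/usr/bin/env bash
            echo "Steps shared by every inheriting workflow"
  deploy_testflight:
    # Runs all of primary's steps first, then its own TestFlight-specific steps.
    before_run:
    - primary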
Let's go over a few steps you can take to optimize your workflows:
- Run the minimum
- Enabling caching to speed up
- Optimize installing dependencies
Run the minimum
Although it might seem obvious, this is still something you can easily forget. Each step can be executed based on conditions, which allows you to speed up certain workflows by simply doing less.
At WeTransfer we were installing a Homebrew dependency, SwiftLint, even though we would not use it in certain workflows, such as a TestFlight build delivery.
Disabling a step based on certain conditions
Disabling a step can be done by editing your bitrise.yml file. Simply add the run_if key with an expression that evaluates to a boolean.
The following example only runs the step for pull request builds.
- script:
    run_if: .IsPR
    inputs:
    - content: |-
        #!/usr/bin/env bash
        ...
A great list of examples can be found here, and it's also worth checking out the Enabling or disabling a Step conditionally page. There you can learn things like skipping a step entirely or only running a step when the build fails.
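For example, a step that should only run when the build has failed needs both is_always_run (so it isn't skipped after a failure) and a matching run_if expression. A minimal sketch, assuming the standard script step and the .IsBuildFailed template expression:

- script:
    title: Report failure
    # Without is_always_run, the step would be skipped as soon as a previous step fails.
    is_always_run: true
    # Only run this step if the build has failed.
    run_if: .IsBuildFailed
    inputs:
    - content: |-
        #!/usr/bin/env bash
        echo "Build failed, collecting logs..."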
Only run tests for changed files
You can adjust your CI setup to only run tests for the files that changed. You could go even further and only run the related test cases, but that might not surface side effects of the code changes you made.
At WeTransfer we run tests for all our in-house developed frameworks. We used to run all of them for every PR, even if we only changed a very small thing. This is not efficient and could lead to an unneeded 30-minute CI build.
Therefore, we built checks into our Fastlane setup to see whether any files changed within one of our frameworks. If so, we run the tests for that framework.
To do this, we created a new lane to fetch the changes for a given PR ID:
desc "Get all changed files in the current PR"
lane :changed_files_in_pr do |options|
result = github_api(
server_url: "https://api.github.com",
api_token: ENV["DANGER_GITHUB_API_TOKEN"],
http_method: "GET",
path: "/repos/WeTransfer/Mocker/pulls/#{options[:pr_id]}"
)
baseRef = result[:json]["base"]["ref"]
# As CI fetches only the minimum we need to fetch the remote to make diffing work correctly.
sh 'git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"'
sh "git fetch --no-recurse-submodules --no-tags"
sh "git diff --name-only HEAD origin/#{baseRef}"
end
Then, in our main Fastlane lane we can call this method and build in checks around our framework tests:
changed_files = changed_files_in_pr(pr_id: ENV["BITRISE_PULL_REQUEST"])

# Test Rabbit if changes exist.
if pr_changes_contains_path(changed_files: changed_files, path: "Vendor/Rabbit/")
  test_project(scheme: "Rabbit", project: "Vendor/Rabbit/Rabbit.xcodeproj")
end
This led to some great time improvements in our builds.
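The pr_changes_contains_path and test_project lanes above are our own helpers. As a minimal sketch of what pr_changes_contains_path could look like (the actual implementation may differ), it simply checks whether any changed file path starts with the given folder:

desc "Check whether any of the changed files live within the given path"
lane :pr_changes_contains_path do |options|
  # changed_files is the multi-line output of `git diff --name-only`.
  changed_files = options[:changed_files].split("\n")
  changed_files.any? { |file| file.strip.start_with?(options[:path]) }
end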
Enabling rolling builds
Another great improvement that is not enabled by default is rolling builds, with every switch enabled. You can turn this on from the App Settings page:
For us at WeTransfer, it wouldn't make sense to finish a previous build once a new one is triggered, simply because the latest commit is the one that needs to complete successfully on CI either way. Therefore, enabling all the switches makes total sense for us and saves us quite some time.
Enabling caching to speed up
Caching is normally one of the best ways to speed up your workflow: unnecessarily downloading dependencies every time you run a workflow can cost a lot of time. Bitrise has a great resource for this to get you started. Although that is already great, we still found a few extra improvements you can apply to speed up your workflow.
But before I dive into these, I want to quickly point something out.
Caching does not always improve performance
The following two sections cover the caching of dependencies. I'll be honest with you: I thought at first that this would be the best improvement I could make. In the end, we cache the dependencies so they get installed a lot faster the next time, right?
Well, it turns out that it slowed down our builds.
Without caching:
With caching:
As you can see, the version without caching is a lot faster than the one with caching. This is mainly because pushing the cache takes longer than the time we save by fetching and using it.
Looking a bit closer into the caching
Looking a bit closer, you can see that installing the GEMs and Brew dependencies is a lot faster with caching.
- Installing brews: 19.6 seconds faster
- Installing GEMs: 21.9 seconds faster
However, pulling and pushing the cache adds an extra 134 seconds, far more than the roughly 41 seconds we save on installs. This was the moment for us to dive into the caching a bit more and see what it was actually caching.
We started by running a few more builds, disabling each cache path one by one to see which one was slowing us down.
Total execution time per combination of enabled cache paths:
- Without caching: 3m54s
- $BITRISE_CACHE_DIR: 3m48s
- $BITRISE_CACHE_DIR + Compress: 3m57s
- $BITRISE_CACHE_DIR, $GEM_CACHE_PATH: 4m39s
- $BITRISE_CACHE_DIR, $BREW_CACHE_PATH: 4m17s
- $BITRISE_CACHE_DIR, $GEM_CACHE_PATH, $BREW_CACHE_PATH: 5m36s
This tells us that the default caching provided by Bitrise gives us the best result, and that caching our GEMs and Brews on top of it only slows things down.
Looking at the documentation tells us why: "The cache is downloaded over the internet"
This means that if you store files that are downloaded from a CDN/cloud storage, you might not see any speed improvement, as downloading it from the Bitrise Build Cache storage will probably take about the same time as downloading it from its canonical CDN/cloud storage location.
Therefore, we decided to remove our GEM and Brew caching at WeTransfer, as it did not result in the improvements we were looking for.
So should I never cache my dependencies?
It depends. Bitrise answers this question as well in the "When to store a dependency in Bitrise Build Cache?" section:
Storing a dependency in Bitrise Build Cache might help if you have reliability issues with the resource’s/dependency’s canonical download location. Popular tools/dependencies might get rate limited (for example, PhantomJS). CDN servers might have availability issues, like jCenter/Bintray ... If that’s the case, storing the dependency in Bitrise Build Cache might help you. It might not improve the build time but it definitely can improve reliability.
That's also the reason why I kept the following two sections in this blog post. You have to decide for yourself whether caching the GEMs and Brew packages helps for your project.
Caching GEMs
Searching for "Bitrise cache gems" on Google brings you to this page which at first seems to be a great solution. However, when using Fastlane you can end up seeing the following error message in your logs:
Custom value X is set for GEMHOME environment variable. This can lead to errors as the gem lookup path may not contain GEMHOME.
And it did. Fastlane failed to run as the GEMs could not be found.
To make it work, we ended up using the official Bitrise documentation on Caching Ruby Gems, which worked great.
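As a rough sketch of that approach (see the official documentation for the exact steps), the idea is to install GEMs into a project-local path and expose that path so it can be cached; the vendor/bundle location below is an assumption:

- script:
    title: Install GEMs
    inputs:
    - content: |-
        #!/usr/bin/env bash
        set -euxo pipefail
        # Install GEMs into a project-local folder so it can be cached between builds.
        bundle config set path "vendor/bundle"
        bundle check || bundle install
        # Expose the location so it can be added to the Cache:Push step's cache paths.
        envman add --key GEM_CACHE_PATH --value "$PWD/vendor/bundle"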
Caching Brew packages
Caching Brew packages at first seemed quite easy. An official brew step is available which also supports caching. However, caching didn't seem to optimize this step by a lot, and the step would also unnecessarily call brew update.
In the end, we implemented a custom script to install brews using a Brewfile. The step looks as follows:
- script:
    run_if: ".IsPR"
    inputs:
    - content: |-
        #!/usr/bin/env bash
        set -euxo pipefail
        BREW_CACHE_DIR="`brew --cache`"
        echo "Brew cache directory: $BREW_CACHE_DIR"
        envman add --key BREW_CACHE_PATH --value $BREW_CACHE_DIR
        brew bundle check || brew bundle
    title: Install brews
First, we get the cache directory using brew --cache, which we then save into an environment variable with the key BREW_CACHE_PATH.
Then we do a brew bundle check and only run brew bundle if needed.
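For reference, the Brewfile is simply a list of the formulas to install. A minimal example, assuming SwiftLint (mentioned earlier) is the only brew dependency, could look like this:

# Brewfile
brew "swiftlint"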
The BREW_CACHE_PATH is added to the cache paths of our Bitrise.io Cache:Push step, so subsequent builds can restore the Homebrew download cache and install the brews a lot quicker.
Enable caching for Pull Requests
The Bitrise.io Cache:Push step by default only runs for non-PR builds, as described in the documentation:
It won't update/upload the cache if the cache did not change, nor in the case of Pull Request builds (unless you change the run_if property of the Step).
However, it's perfectly possible to cache your dependencies for PRs and speed up those builds as well. For us, the final caching step looks as follows:
- cache-push:
    run_if: true
    inputs:
    - cache_paths: |-
        $BITRISE_CACHE_DIR
        $GEM_CACHE_PATH
        $BREW_CACHE_PATH
    description: Runs Fastlane test on the target PR
This will reuse your cache even for pull requests and can easily save minutes per build.
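Keep in mind that the Cache:Push step only uploads the cache. To actually benefit from it, the workflow also needs the Bitrise.io Cache:Pull step early on, before any dependencies are installed. A minimal sketch:

# At the top of the workflow, before installing GEMs and brews:
- cache-pull: {}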
Conclusion
As you can see, there's quite a lot you can do to improve your workflows. Take some time to optimize them now and save time in the long term.
Feel free to reach out to me on Twitter if you have any feedback or questions. You can follow my personal blog, SwiftLee, for more workflow- and Swift-related blog posts.
Thanks!