Implementing automated mobile UI testing

By Sam Berry under Engineering 03 November 2014

In the testing industry, we always find ourselves in the middle of a ‘should we / shouldn’t we?’ battle when it comes to test automation and choosing the framework that best fits the needs of a software development team.

Combine that with the number of solutions, ideas and continuously evolving frameworks, and the notion of one-size-fits-all in the mobile space seems pretty unrealistic, especially as the iOS and Android platforms each bring their own diverse range of testing complexities.

But for the testing team at The App Business, and the company as a whole, we prefer to get ‘hands on’ and figure out solutions by ‘doing’ – so we decided to run an automated mobile user interface (UI) testing pilot.

The pilot background

We know that we can’t test everything and we certainly can’t automate everything. However, as part of the evolving testing strategy at The App Business, it’s important we’re always figuring out the next steps to improve quality. For us today, this is automated user interface (UI) testing. It helps us in a number of ways, including quicker feedback, less time spent running regression tests, and greater cost efficiency.

UI automation itself is relatively mature in the world of web development, where WebDriver has become an industry standard, but it is still in its infancy for mobile development. There are a number of emerging frameworks, such as Apple’s Instruments Automation framework, Appium and Calabash, among many others. The question is: which works for us?

Historically at TAB, test automation has been solid on many backend projects, but UI automation was limited until the recent start of a new project. This project relies heavily on matching various states of the app’s behaviour against our servers, and on testing a range of different data sets. On that basis, we decided automated UI testing could be very valuable, and a good starting point for a pilot.

This post outlines our design philosophy and initial implementation for the automated UI test suite, and how we plan to move forward with our findings in the future.

The design strategy

Our focus was to build lean UI tests that exercise the functionality of the app, rather than layout and styles. Ideally, we should comfortably re-skin and alter layouts without the automated UI tests breaking – hence the tests should only ever need to be edited if the actual user journeys change. Ultimately, the tests would help developers make quick changes to the app without worrying about affecting existing features.

To keep the test suite lean, we made sure our automated UI tests did not duplicate anything already covered by the unit tests written by our developers. For example, testing field validation on a login page would mean unit testing the upper bound, lower bound, invalid characters and whatever else the validation demands. The UI automation simply performed the ‘happy path’ scenarios for logging into the app, as well as a test for each error message.
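As an illustration of what that looks like in practice (the step wording, scenario names and error copy below are invented, not our real feature files), a lean login feature covers only the happy path plus one scenario per error message, leaving boundary and character-level validation to the unit tests:

```gherkin
Feature: Logging in

  Scenario: Logging in with valid credentials
    Given I am on the login page
    When I log in with valid credentials
    Then I should see my account page

  Scenario: Logging in with an incorrect password
    Given I am on the login page
    When I log in with an incorrect password
    Then I should see the error "Your email or password was not recognised"
```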

We also wanted our automated UI tests to be able to check changes made by developers, so the tests were set up to run on our continuous build server. Running the entire suite after every commit would take too long, so we tagged each test as either main or smoke: smoke tests run after every commit, and the main suite runs overnight.
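In Cucumber, this two-tier split is just a matter of tagging scenarios (a sketch — the scenario names are illustrative):

```gherkin
@smoke
Scenario: Logging in with valid credentials
  # ...runs on every commit

@main
Scenario: Logging in with an incorrect password
  # ...runs in the overnight suite
```

The build server can then run `cucumber --tags @smoke` after each commit, and the full suite overnight.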

The framework implementation

With our design strategy in mind, the next step was to find an automation framework that would fit with the way we work. At TAB we have a strong focus on behaviour-driven development (BDD) and the notion of ‘test first, code later’. As a team, we collaboratively write feature files using Gherkin syntax, so it made sense to use a framework that complements BDD. Calabash, one of the industry’s best-known and most widely used frameworks, seemed like a good candidate to start with, especially as it runs from Cucumber by default.

Next, we needed to be able to set up test data in the backend architecture for the various tests we were going to run. The project currently uses a Rails server, so we added the Calabash files into our Rails app, allowing it to start up the backend when the tests are initiated. This let us truncate the database and populate it with fresh test data before running each scenario. To set up the iOS app to work with Calabash, we created a separate build profile that compiled the app with Calabash’s files. Rather than pointing at the development server, this profile pointed at the Rails app spun up locally.
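A minimal sketch of the per-scenario reset, assuming an ActiveRecord-backed Rails backend (the file path, helper name and seed script are hypothetical, not our actual code):

```ruby
# features/support/hooks.rb — sketch of a per-scenario database reset.
# Tables in SKIP_TABLES (schema bookkeeping) are left alone; everything
# else is truncated so each scenario starts from a known state.
SKIP_TABLES = %w[schema_migrations].freeze

def reset_test_database(connection)
  (connection.tables - SKIP_TABLES).each do |table|
    connection.execute("TRUNCATE TABLE #{table}")
  end
end

# In a real suite this would run inside Cucumber's Before hook, e.g.:
#   Before do
#     reset_test_database(ActiveRecord::Base.connection)
#     load Rails.root.join('db', 'seeds_test.rb')  # hypothetical seed script
#   end
```

Truncating rather than dropping and re-migrating keeps the reset fast enough to run before every scenario.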

For the actual app automation, we extended Calabash’s page object class to include additional functionality, such as the ability to define an element as static or dynamic, and to assert that every static element on the page is present. To identify elements, we assigned an ID to each element we wanted to interact with. Each ID starts with the name of the page, followed by an underscore and then a brief description. So a label containing a user’s name on an account page, for example, would be AccountPage_UserName. This allows us to change the language, layout and even element type of an object without needing to modify any of the tests.
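The shape of that extension can be sketched in plain Ruby as follows. The class and method names here are illustrative, not our real classes; the `* marked:'…'` query syntax is Calabash’s standard way of finding an element by its accessibility ID:

```ruby
# Sketch of a page object base class with static/dynamic element registries.
class BasePage
  class << self
    def static_elements
      @static_elements ||= {}
    end

    def dynamic_elements
      @dynamic_elements ||= {}
    end

    # Declare an element that should always be present on the page
    def static(name, id)
      static_elements[name] = id
    end

    # Declare an element that only appears for certain states or data
    def dynamic(name, id)
      dynamic_elements[name] = id
    end
  end

  # Calabash-style query for an element's accessibility ID
  def query_for(id)
    "* marked:'#{id}'"
  end

  # In the real suite this would wait for each static element via Calabash;
  # here it just returns the queries it would check.
  def static_queries
    self.class.static_elements.values.map { |id| query_for(id) }
  end
end

# IDs follow the PageName_Description convention described above.
class AccountPage < BasePage
  static :title,     'AccountPage_Title'
  static :user_name, 'AccountPage_UserName'
end
```

Because the tests only ever refer to these IDs, a re-skin that changes a label into a button, say, leaves the page objects and scenarios untouched.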

Next steps with UI automation

I believe this philosophy and implementation gives us a suite that is easy to maintain, doesn’t break when layouts and minor design changes are made, and is tied to the features we develop.

However, as with any new framework and software implementation, continuous improvement and innovation are key, and there are many different directions in which we can take this basic framework.

So stay tuned and watch this space!