Automatic for the People

UI Automation is an iOS framework, introduced in 2010, that (not surprisingly) lets you set up automated tests for your Cocoa application’s user interface.

It’s also something, as several listeners pointed out, that I completely neglected to mention in my recent podcast episode on automated tests, because I hadn’t heard about it before myself.

Gadzooks! Mea culpa!

To find out more, I had a look at the Automated UI Testing section of the Instruments User Guide, and I watched the 2010 WWDC session “Automating User Interface Testing with Instruments” (requires ADC account).1

As it says in the WWDC video, UI Automation wasn’t built for me. It was made for QA automation engineers. (Specifically, Apple’s own QA automation engineers.) So it doesn’t make use of the compiler infrastructure like Xcode’s unit tests do. Instead of writing your tests in Objective-C, you write them in JavaScript, which takes a bit of getting used to when you haven’t written any JavaScript in over 10 years.2

I’ve tried it out now. There’s a lot to like. But there are some gotchas, timing issues still crop up, and in the bigger picture, I have the same doubts about these tests as I do about regular unit tests.

Getting Started
I used an Xcode project created from the template Master-Detail Application, with Core Data thrown in for good measure. If you remember, that project has a table with a plus button, which, when pushed, adds a row with the current time:

[Image: iOS table with several rows and a plus button]

There’s no minus button, but when you slide leftward on a row with your finger, a Delete button appears which lets you delete that row:

[Image: iOS table with several rows, one of which has a Delete button]

So I put together two UI Automation tests for those bits of functionality.

You access the controls/views/etc. of your application through a UI accessibility element hierarchy. I had been afraid that this accessibility layer might diverge from the actual Cocoa controls in important ways, but the two seem to match up, at least for my two simple tests.
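
If you’re not sure what that hierarchy actually contains for your app, you can dump it to the Instruments log and poke around. Something along these lines should do the trick (the variable names are just mine):

var target = UIATarget.localTarget()
var app = target.frontMostApp()

// Write the accessibility element tree of the main window to the Instruments log
app.mainWindow().logElementTree()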

The Tests Themselves
For the “add row” test, I needed to add an accessibility label to the plus button, which wasn’t there in the original Objective-C template code:

addButton.accessibilityLabel = @"Add Entry";

In my JavaScript code, I access that button through its accessibility label, and tap it:

var addButton = UIATarget.localTarget().frontMostApp().navigationBar().buttons()["Add Entry"]
addButton.tap()

You never need to work in screen or view coordinates of any sort, which is a relief. If you don’t want to find the element by its accessibility label, you can also do so through its subview index.
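
For example, a sketch of finding the same plus button by position rather than by label might look like this (the exact index depends on how many buttons the navigation bar has, so treat it as illustrative):

// Grab the navigation bar's buttons by index instead of by accessibility label;
// in this template the plus button happens to be the last one
var navButtons = UIATarget.localTarget().frontMostApp().navigationBar().buttons()
var addButtonByIndex = navButtons[navButtons.length - 1]
addButtonByIndex.tap()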

For the “delete row” test, I access the last row of the table:

var tableView = UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0]
var lastRow = tableView.cells()[tableView.cells().length - 1]

In addition to tapping and a bunch of other useful actions, there’s a specific action you can invoke to simulate “flicking”. What’s nice is that, even here, you don’t need to attempt to calculate view coordinates. Instead, you use a zero-to-one coordinate system, where {x:0.0, y:0.0} is the top left and {x:1.0, y:1.0} is the bottom right. (But don’t actually use 1.0 for a view that spans the width or height of the entire device, because that’s an invalid offscreen coordinate.) So here’s what I do:

lastRow.flickInsideWithOptions({startOffset:{x:0.9, y:0.5}, endOffset:{x:0.0, y:0.5}})

Now, on the visible screen, the button that appears in the “flicked” row just says “Delete”. But in the accessibility world, it’s called “Confirm Deletion for {name of cell}”. So to get a reference to that button, you need to do something like this:

var deleteButton = lastRow.buttons().firstWithPredicate("name beginswith 'Confirm Deletion'")
deleteButton.tap()

The attempt to get a reference to that button actually triggers another cool feature of UI Automation: timeouts. If the button doesn’t exist when your code first asks for it, it waits by default for 5 seconds before giving up. That’s very handy (and also something you can extend to a longer timeout if necessary), but unfortunately doesn’t cover all cases.
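
Extending the timeout is just a matter of pushing a longer value before the lookup and popping it afterward, something like this (the 10 seconds here is an arbitrary example):

// Temporarily raise the implicit timeout from the default 5 seconds to 10
UIATarget.localTarget().pushTimeout(10)
var deleteButton = lastRow.buttons().firstWithPredicate("name beginswith 'Confirm Deletion'")
UIATarget.localTarget().popTimeout()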

For the “add row” test, I check, after clicking the plus button, that the row count has increased by 1. I could in theory wait for the existence of a cell with a particular name, which as far as I can tell would invoke the UI Automation timeout feature, but in this particular case that wouldn’t work. The cell name depends on the exact second the button was pressed, something I can’t guarantee will be the same if I also attempt to capture the time for myself in a separate variable. (It would work most of the time…) But since I don’t have anything to hang a timeout off of, every so often my row count check occurs before the business of adding the new row is complete, leading to a mysterious test failure. To be completely sure, I needed to add my own polling:

var oldCount = tableView.cells().length
var expectedCount = oldCount + 1

addButton.tap()

var newCount

// Poll for up to 3 seconds (12 tries, 0.25 seconds apart) for the new row to show up
for (var i = 0; i < 12; i++) {
    newCount = tableView.cells().length

    if (newCount == expectedCount) {
        UIALogger.logPass("Added entry correctly")
        break
    }

    UIATarget.localTarget().delay(.25)
    UIALogger.logDebug("Delaying...")
}

// If we fall out of the loop without ever hitting the expected count, the test fails
if (newCount != expectedCount) {
    UIALogger.logFail("Pressing Add Entry (plus) button should result in " + expectedCount + " rows, but instead resulted in " + newCount + " rows")
}

Extra, messy timeout logic is something I talked about in my podcast episode, and it’s disappointing to find the same issue here with no elegant solutions.

The same holds true for the “delete row” test; because I’m comparing row counts with no timeout, it fails every so often, so I added a similar delaying loop.
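
If I keep going down this road, I’ll probably factor the polling out into a little helper function. A rough sketch of what I mean (this is not code that’s actually in the project):

// Hypothetical helper: poll for up to 3 seconds until the table has the
// expected number of rows, returning false if it never gets there
function waitForCellCount(tableView, expectedCount) {
    for (var i = 0; i < 12; i++) {
        if (tableView.cells().length == expectedCount) {
            return true
        }
        UIATarget.localTarget().delay(.25)
    }
    return false
}

// oldCount here is the cell count captured before flicking the row
if (!waitForCellCount(tableView, oldCount - 1)) {
    UIALogger.logFail("Deleting a row should have left " + (oldCount - 1) + " rows")
}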

All of this code is available in my “Automatic for the People” GitHub project.

Other Annoyances
There were a few other things that annoyed me as I worked on this.

Instruments has this concept of “importing” a test script, a state of affairs where Instruments ill-advisedly owns the file. If you change the file elsewhere, Instruments prompts you each time you start testing to either revert to your version or use the Instruments version, even though Instruments shouldn’t have made any changes of its own. I see no reason for this, and it gets extremely tedious to keep clicking the “Revert” button each time you run the script again. It’s obvious that the authors of this feature did not expect the script to be under active development during testing. (rdar://2325401)

There’s no way for the script to tell Instruments that it’s done, so Instruments keeps running forever once the tests have finished. You have to stop it manually yourself each time. The documentation even mentions this, but that doesn’t make it right. (rdar://2326401)

The Bigger Problem
That said, I can’t really complain at the low level. UI Automation gives you the tools you need to run these tests successfully.

But there are two bigger issues.

The first is something I touched on in the podcast: such simple cases as these are exactly the sorts of things that are never going to fail. Or at least that never fail in my experience. And if you spend a lot of time writing tests that will never fail, are they really worth it? The only time I’ve found simple tests to be useful is when you can use them during the initial development to run through a bunch of cases that you could never try by hand. That may be a worthwhile use of UI Automation as well; time will tell.
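
For example, a loop like the following (my own sketch, not something in the project) can run the add-row case far more times than anyone would bother to by hand:

// Tap the Add Entry button 50 times, then check the final row count
// (in practice the same polling as above would be safer than a fixed delay)
var startCount = tableView.cells().length

for (var i = 0; i < 50; i++) {
    addButton.tap()
    UIATarget.localTarget().delay(.25)
}

if (tableView.cells().length != startCount + 50) {
    UIALogger.logFail("Expected " + (startCount + 50) + " rows after 50 taps")
}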

The second is that the UI failures I have seen involve aspects of the user interface that UI Automation can’t measure.

In one case, I had a table view that, due to a change I made, began to stutter when it scrolled. As far as I can tell, there is no “is scrolling smoothly” property of the UIAScrollView accessibility element. I can’t even imagine how they would implement it. In the 2010 WWDC session, they mention using other instruments in concert with the automation instrument to track down performance issues, but that requires a person to notice the problem first.

I’m going to keep playing around with it; it’s got a lot of potential. But even though I titled my podcast “You Can’t Run a Script to Test Feel” without knowing about UI Automation, it seems to me that the sentiment still rings true.

 

1. Thanks to Ben Rimmington for the links, and esp. for cracking the code to link to a specific WWDC session video!

2. Because, after all, you’re the friggin’ Batman and Robin of Objective-C.
