The Sound of My Voice

Want to download something that contains my dulcet tones other than an Edge Cases podcast episode? If so, you might want to act fast. (Or you may not: see Updates at bottom.)

As you probably know, for the moment, you can go and access Apple’s WWDC presentations for 2012, 2011, and 2010 directly from their website. You can also access them through iTunes, where you can download them (in high def or standard def) and have access to them forever, something I recommend if there’s a particular older session you find useful.

If you look closely on that iTunes page, however (again, for the moment), you’ll see that there’s also a link for WWDC 2009 sessions, both Mac and iPhone.

And if you look in those, you’ll find, under both sections, Session 418 “Customizing Xcode for Your Development Workflow”. Or you could follow these links directly: current Mac link, current iPhone link. Let me know if those don’t work. (Update: the Mac link may not work for everyone.)

That was the last session I helped to present at WWDC.

It’s interesting for me to go back and listen to how I sound and how I present myself. No “Um”s or “So”s here! No stupid jokes. (Well, fewer stupid jokes.) We practiced over and over and over again. Practiced the wording, practiced the pacing, practiced the demos.

For Edge Cases, we don’t practice at all, and there are at most a few hours of preparation. So: a lot less polished, but I can speak my mind. It’s worth the tradeoff, I think.

Note that knowing me or the other presenters is probably the only reason to download this. In the session, we’re talking about Xcode 3, since superseded by Xcode 4 and later WWDC sessions.

And I suspect that sometime after they get around to releasing the sessions for WWDC 2013, promised to be during the conference itself in a few weeks, the oldest sessions will go away for good.

P.S. Speaking of which, did anyone download the 2007 (found it!), 2008 (found it!), or 2006 WWDC sessions when they were available? I’d love to get a copy of my session from that year, too.

Update #1: Thanks to Neil in the comments, I see that there are in fact some older sessions available back to 2004.

Whether you think older session downloads are in danger depends on whether you think more (or all) sessions were originally available for those years and some were dropped over time, or whether you think these limited sessions were all that were ever available, in which case no culling has ever occurred.

I kind of hope it’s the former, because while the newly-revealed sessions do include the 2008 session I was looking for, they don’t include the 2007 session I wanted. So: still looking for that.

Update #2: OK, so I’ve found the 2007 and 2008 sessions I wanted, which were there all the time. My fault! But it turns out I had forgotten I’d done a session in 2006, too. The 2006 sessions are really sparse, and I don’t think I’m making a mistake when I say I can’t find my session there. So, at the risk of repeating myself: still looking for that.

Adventure Addendum

I omitted a couple of things from my latest Edge Cases topic, text adventure games (“A Programmer and a Puzzler”), due to time constraints, forgetfulness, etc., so I wanted to talk about them here.

First, I wanted to mention the very first text adventure game I played as a kid, Mission Impossible by Scott Adams. It was on a big, bulky cartridge that I plugged into my home computer. I remember getting hopelessly stuck at one point, and thinking, well, that’s it. There was no Internet to consult. In hindsight, I could’ve dialed up some sort of electronic bulletin board for hints, but that wasn’t something I knew how to find back then.

I played it again after recording the podcast, since now it’s available for free online (see above link), and…got completely stuck again. I had to turn to a walk-through, which made it entirely unenjoyable for me to continue playing. Still not any good at puzzles, it seems.

Second, I mentioned in the podcast that the TADS syntax I used to write my games was very similar to C language syntax. Now, that’s true of the TADS language. But the vastly more popular IF programming language Inform (which I also mentioned) has a syntax based on natural language. That syntax looks quite different and can be quite a bit easier to write. Check out this link for a screencast that introduces that syntax and the Inform development application, which has a lot of neat features. If you’re going to start writing a new IF game, try Inform first.

Third, I made it sound in the podcast like there were no graphical adventure games before Myst, which is wrong. While Myst heralded an era of CD-based games with much richer multimedia content, there were plenty of graphical games distributed on floppies beforehand. I even played one of them: Indiana Jones and the Fate of Atlantis, which I enjoyed because its puzzles were exceptionally easy.

And finally, I mentioned on the podcast that I liked how the free games published by authors using languages like Inform were much more likely to survive platform transitions, like the PowerPC to Intel transition of OS X, because they were data files, not full executables. This was in contrast to commercial games like the ones from Infocom, which had been reissued for the Mac, but many years ago, and were no longer runnable.

These days, however, if you search for Infocom in the iOS App Store on iTunes, you’ll find an entry for Lost Treasures of Infocom, including many (but not all) of the games from the 80s, available to download for free. (Though you’ll have to pay $10 to actually unlock all the games.)

The way these games were updated for iOS deserves its own blog post, so I’ll be doing that at some point. As a preview, I’ll say: I wish they’d done it better.

A Plist Apart

Or, a Story in Eight Pictures

We’ve all used Xcode’s special plist editor, which has a structured editing environment so you don’t have to maintain the XML formatting yourself, and provides a bunch of standard Info.plist keys. Very useful.

Xcode's plist editor

But if you do want to look at the XML for a plist file in your project, it’s easy: right-click on the file in the navigator pane and, under Open As, choose “Source Code”.

Xcode's Open As submenu

Xcode's source code editor

But what if it’s a standalone file? There’s no navigator pane, so there’s nothing to right-click on to bring up the contextual menu.

Non-project plist file in Xcode

Luckily, you can press Cmd-zero, or use the View → Navigators → Show Navigator menu item. This opens the navigator pane, which, here, only shows the one file instead of the contents of an entire project.

Xcode's View / Navigators / Show Navigator menu item

Then, you can right-click on the file as before.

Open As again

Many standalone Info.plist files are saved as binary, however, and Xcode won’t automatically translate that to text for you. But if you open the File menu, and hold down the Option key, you’ll see the Save As menu item, which will let you save over the existing binary as Property List XML.

Xcode's Save As menu item

Xcode's Save As plist options

The trick here, at least in Xcode 4.6, is that it still won’t let you look at the file as Source Code unless you close and reopen it.
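As an aside not covered in the screenshots above: if you’d rather skip the Save As dance entirely, the plutil command-line tool that ships with OS X can do the same binary-to-XML conversion. (The filename here is just a placeholder, of course.)

```shell
# Convert a binary plist to XML, overwriting the file in place:
plutil -convert xml1 MyFile.plist
```

Then any text editor, Xcode included, will show you the XML directly.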

Portrait Schmortrait

Since you did so well with my last iOS rotation question, I’ve got another one for you.

In the code I’m working on, we show a login screen if you’ve timed out, by inserting the login view as a subview of the current view. We should be using presentViewController:animated:completion: or some sort of navigation controller push, but we’re not, and can’t switch to anything like that for this release.

The login screen, on the iPhone and iPod touch, should be in portrait. (Again, whether this is the best idea or not is a question for another time.)

The standard rotation APIs are good at keeping you from switching away from your current orientation — for example, preventing you from switching from portrait to landscape if you started out in portrait. What they aren’t good at is forcing the view to show itself in portrait when you’re already in landscape. And you need that if you want to actually enforce a portrait-only view, so it seems like an important omission to me.

There is a hack that works in iOS 5, documented in this Stack Overflow question, to force a view to display in a particular orientation regardless of the physical orientation. But it doesn’t work in iOS 6.

So anybody know of a new hack that will work in iOS 6? Or, better yet, a less hacky fix that will work in all OSes? I’ve made a new GitHub project, Portrait Schmortrait, that demonstrates the problem (and the working iOS 5 hack). Thoughts? Contact me in the comments here or on Twitter.

Keyboard Schmeeboard

(Updated! See bottom.)

There’s a lot to dislike in Apple’s rotation APIs.

Let’s start with how they changed, from iOS 5 to iOS 6, with no buffer period.

Normally, when a new API is introduced, the old one is deprecated but still works the same way for a few more major versions. Here, a new version was introduced, and the old version stopped working immediately. (What was the rush?)

Then, there’s how the keyboard is handled. If you have the keyboard visible on your iOS device, and you rotate the device from portrait to landscape or vice versa, you’ll see the keyboard change its width (and change its height slightly), but otherwise remain visible throughout. So you’d think that the underlying APIs would reflect that.


Instead, as far as the official APIs are concerned, your application’s code is notified that the keyboard is hidden, then that the view is rotated, then that the keyboard is shown again.

This totally screws you over if you have content near the bottom of your view that you want to animate smoothly along with the (always visible) keyboard.

I have a GitHub project, (also) called Keyboard-Schmeeboard (because it’s a good name), which demonstrates this problem. You’ll have to comment out the #define kIOS5Workaround and #define kIOS6Workaround lines in ViewController.m to see the broken behavior on iOS 5 and iOS 6.

As the above line suggests, however, there are workarounds. On iOS 5, you can override shouldAutorotateToInterfaceOrientation: to tell yourself a rotation is taking place before you’re told the keyboard is being hidden. Then, you simply ignore the keyboard-will-hide notification entirely, and use the keyboard-will-show notification to animate your view to the correct new location.

Note that you can’t do it in the rotation call where you would normally do it if the keyboard weren’t visible, because you don’t know the new height of the keyboard yet. You could hardcode a keyboard height in the rotation call, but that ignores localized keyboards of variable height, so it’s a bad idea.
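In sketch form, the iOS 5 workaround looks something like the following. (The rotatingWithKeyboard flag and the contentView outlet are my placeholder names, not necessarily what the GitHub project uses.)

```objc
- (void)viewDidLoad {
    [super viewDidLoad];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(keyboardWillShow:)
                                                 name:UIKeyboardWillShowNotification
                                               object:nil];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(keyboardWillHide:)
                                                 name:UIKeyboardWillHideNotification
                                               object:nil];
}

- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation {
    // iOS 5: called before the keyboard-will-hide notification arrives,
    // so we can flag that the upcoming "hide" is spurious. (A real
    // implementation would only set this when the keyboard is up.)
    self.rotatingWithKeyboard = YES;
    return YES;
}

- (void)keyboardWillHide:(NSNotification *)notification {
    if (self.rotatingWithKeyboard) {
        // A rotation, not a real dismissal: ignore it entirely.
        return;
    }
    // ...otherwise, animate the content back to its full height here...
}

- (void)keyboardWillShow:(NSNotification *)notification {
    // The end frame is in screen coordinates; convert to view coordinates.
    CGRect keyboardFrame = [[notification.userInfo objectForKey:UIKeyboardFrameEndUserInfoKey] CGRectValue];
    keyboardFrame = [self.view convertRect:keyboardFrame fromView:nil];

    NSTimeInterval duration = [[notification.userInfo objectForKey:UIKeyboardAnimationDurationUserInfoKey] doubleValue];
    [UIView animateWithDuration:duration animations:^{
        // Shrink the content view so its bottom sits just above the keyboard.
        CGRect contentFrame = self.contentView.frame;
        contentFrame.size.height = CGRectGetMinY(keyboardFrame) - CGRectGetMinY(contentFrame);
        self.contentView.frame = contentFrame;
    }];
    self.rotatingWithKeyboard = NO;
}
```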

The animation information you’re given in the keyboard-will-show method doesn’t exactly match the actual rotation animation, but it’s all you’re going to get, and it’s close enough. In Keyboard Schmeeboard, to see the mismatch, uncomment the #define kIOS5Workaround line again in ViewController.m, and when you rotate the app with the keyboard present, look for the telltale slivers of pink color as the animation proceeds, due to the background view showing through. In a real-world project, I’d make sure the background view had a color matching the content view, so that such minor differences were unnoticeable.

(It occurs to me as I’m writing this that I could save off the information from the rotation call, and use it during the keyboard-will-show notification. Too fiddly? Worth exploring, anyway.)

In iOS 6, there’s a similar call you can override, supportedInterfaceOrientations. But here’s the rub. Unlike the iOS 5 workaround, this iOS 6 call is also called if you just dip your device back and then up again, without rotating it either to the right or the left. If you go ahead and try this on an iOS 6 device (being sure to uncomment the #define kIOS6Workaround line again in ViewController.m), and then hide the keyboard, the space where the keyboard was will remain pink. Why? Because your “is rotating with keyboard” flag was set in supportedInterfaceOrientations and never turned back off. (Because you weren’t actually rotating.)

I’m actually looking for a solution to this problem right now, so if this rings any bells for people, try out the GitHub project, and let me know your thoughts in the comments or via Twitter.


Joel Bernstein tweets:

@apontious You may be able to disable both hacks and use UIViewAnimationOptionBeginFromCurrentState. Tried it, seems to work.

This does indeed work.

What he means is, dispense with trying to identify when you’re rotating or not. Instead, just implement the keyboard-will-show and keyboard-will-hide animations without any extra logic. But, for keyboard-will-show, specify UIViewAnimationOptionBeginFromCurrentState as one of the animation options.

This will override the changes that keyboard-will-hide was attempting to make, and just animate properly from the previous keyboard location to the next keyboard location. Neat.
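A minimal sketch of the fix (contentView is again my placeholder name): the will-hide handler animates the content back to full height as usual, with no rotation-detection logic, and the will-show handler just adds the one extra option.

```objc
- (void)keyboardWillShow:(NSNotification *)notification {
    CGRect keyboardFrame = [[notification.userInfo objectForKey:UIKeyboardFrameEndUserInfoKey] CGRectValue];
    keyboardFrame = [self.view convertRect:keyboardFrame fromView:nil];
    NSTimeInterval duration = [[notification.userInfo objectForKey:UIKeyboardAnimationDurationUserInfoKey] doubleValue];

    [UIView animateWithDuration:duration
                          delay:0
                        options:UIViewAnimationOptionBeginFromCurrentState
                     animations:^{
                         // Thanks to BeginFromCurrentState, this animation takes
                         // over from the in-flight will-hide animation instead of
                         // restarting from the fully-expanded layout.
                         CGRect contentFrame = self.contentView.frame;
                         contentFrame.size.height = CGRectGetMinY(keyboardFrame) - CGRectGetMinY(contentFrame);
                         self.contentView.frame = contentFrame;
                     }
                     completion:nil];
}
```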

I’ve updated the GitHub project to have a second project in it, Corrected Keyboard Schmeeboard, with this fix.

Thanks, Joel!

Automatic for the People

UI Automation is an iOS framework, introduced in 2010, that (not surprisingly) lets you set up automated tests for your Cocoa application user interface.

It’s also something, as several listeners pointed out, that I completely neglected to mention in my recent podcast episode on automated tests, because I hadn’t heard about it before myself.

Gadzooks! Mea culpa!

To find out more, I had a look at the Automated UI Testing section of the Instruments User Guide, and I watched the 2010 WWDC session “Automating User Interface Testing with Instruments” (requires ADC account).1

As it says in the WWDC video, UI Automation wasn’t built for me. It was made for QA automation engineers. (Specifically, Apple’s own QA automation engineers.) So it doesn’t make use of the compiler infrastructure like Xcode’s unit tests do. Instead of writing your tests in Objective-C, you write them in JavaScript, which takes a bit of getting used to when you haven’t written any JavaScript in over 10 years.2

I’ve tried it out now. There’s a lot to like. But there are some gotchas, timing issues still crop up, and in the bigger picture, I have the same doubts about them as I do about regular unit tests.

Getting Started
I used an Xcode project created from the template Master-Detail Application, with Core Data thrown in for good measure. If you remember, that project has a table with a plus button, which, when pushed, adds a row with the current time:

iOS table with several rows and plus button

There’s no minus button, but when you slide leftward on a row with your finger, a Delete button appears which lets you delete that row:

iOS table with several rows, one of which has Delete button

So I put together two UI Automation tests for those bits of functionality.

You access the controls/views/etc. of your application through a UI accessibility element hierarchy. I had been afraid that this accessibility layer might diverge from the actual Cocoa controls in important ways, but they seem to be the same, at least for my two simple tests.

The Tests Themselves
For the “add row” test, I needed to add an accessibility label to the plus button, which wasn’t there in the original Objective-C template code:

addButton.accessibilityLabel = @"Add Entry";

In my JavaScript code, I access that button through its accessibility label, and tap it:

var addButton = UIATarget.localTarget().frontMostApp().navigationBar().buttons()["Add Entry"]
addButton.tap()

You never need to work in screen or view coordinates of any sort, which is a relief. If you don’t want to find the element by its accessibility label, you can also do so through its subview index.

For the “delete row” test, I access the last row of the table:

var tableView = UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0]
var lastRow = tableView.cells()[tableView.cells().length - 1]

In addition to tapping and a bunch of other useful actions, there’s a specific action you can invoke to simulate “flicking”. What’s nice is that, even here, you don’t need to attempt to calculate view coordinates. Instead, you use a zero-to-one coordinate system, where {x:0.0, y:0.0} is the top left and {x:1.0, y:1.0} is the bottom right. (But don’t actually use 1.0 for a view that spans the width or height of the entire device, because that’s an invalid offscreen coordinate.) So here’s what I do:

lastRow.flickInsideWithOptions({startOffset:{x:0.9, y:0.5}, endOffset:{x:0.0, y:0.5}})

Now, on the visible screen, the button that appears in the “flicked” row just says “Delete”. But in the accessibility world, it’s called “Confirm Deletion for {name of cell}”. So to get a reference to that button, you need to do something like this:

var deleteButton = lastRow.buttons().firstWithPredicate("name beginswith 'Confirm Deletion'")

The attempt to get a reference to that button actually triggers another cool feature of UI Automation: timeouts. If the button doesn’t exist when your code first asks for it, it waits by default for 5 seconds before giving up. That’s very handy (and also something you can extend to a longer timeout if necessary), but unfortunately doesn’t cover all cases.
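Extending the timeout, for the record, is done with a push/pop pair on the target (the 15 seconds here is an arbitrary example; lastRow is the reference from above):

```javascript
// Temporarily raise the implicit element-lookup timeout from the
// default 5 seconds to 15, then restore the previous value.
UIATarget.localTarget().pushTimeout(15)
var deleteButton = lastRow.buttons().firstWithPredicate("name beginswith 'Confirm Deletion'")
UIATarget.localTarget().popTimeout()
```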

For the “add row” test, I check, after clicking the plus button, that the row count has increased by 1. I could in theory wait for the existence of a cell with a particular name (something that, as far as I can tell, would invoke the UI Automation timeout feature), but in this particular case that wouldn’t work. The cell name depends on the exact second the button was pressed, something I can’t guarantee will be the same if I also attempt to capture the time for myself in a separate variable. (It would work most of the time…) But since I don’t have anything to hang a timeout off of, every so often my row count check occurs before the business of adding the new row is complete, leading to a mysterious test failure. To be completely sure, I needed to add my own polling:

var oldCount = tableView.cells().length
var expectedCount = oldCount + 1

var newCount
for (var i = 0; i < 12; i++) {
    newCount = tableView.cells().length
    if (newCount == expectedCount) {
        break
    }
    // Wait a moment before polling the row count again.
    UIATarget.localTarget().delay(0.25)
}

if (newCount == expectedCount) {
    UIALogger.logPass("Added entry correctly")
} else {
    UIALogger.logFail("Pressing Add Entry (plus) button should result in " + expectedCount + " rows, but instead resulted in " + newCount + " rows")
}

Extra, messy timeout logic is something I talked about in my podcast episode, and it’s disappointing to find the same issue here with no elegant solutions.

The same holds true for the “delete row” test; because I’m comparing row counts with no timeout, it fails every so often, so I added a similar delaying loop.

All of this code is available in my “Automatic for the People” GitHub project.

Other Annoyances
There were a few other things that annoyed me as I worked on this.

Instruments has this concept of “importing” a test script, a state of affairs where Instruments ill-advisedly owns the file. If you change it elsewhere, you’ll be prompted in Instruments to revert to that version or use the Instruments version each time you start testing, even though Instruments shouldn’t have made any of its own changes. I see no reason for this, and it gets extremely tedious to keep clicking the “Revert” button each time you run the script again. It’s obvious that the authors of this feature did not expect the script to be under active development during testing. (rdar://2325401)

There’s no way for the script to tell Instruments that it’s done, so Instruments keeps running forever once the tests have finished. You have to stop it manually each time yourself. The documentation even mentions this, but that doesn’t make it right. (rdar://2326401)

The Bigger Problem
That said, I can’t really complain at the low level. UI Automation gives you the tools you need to run these tests successfully.

But there are two bigger issues.

The first is something I touched on in the podcast: such simple cases as these are exactly the sorts of things that are never going to fail. Or at least that never fail in my experience. And if you spend a lot of time writing tests that will never fail, are they really worth it? The only time I’ve found simple tests to be useful is when you can use them during the initial development to run through a bunch of cases that you could never try by hand. That may be a worthwhile use of UI Automation as well; time will tell.

The second is that the UI failures I have seen involve aspects of the user interface that UI Automation can’t measure.

In one case, I had a table view that, due to a change I made, began to stutter when it scrolled. As far as I can tell, there is no “is scrolling smoothly” property of the UIAScrollView accessibility element. I can’t even imagine how they would implement it. In the 2010 WWDC session, they mention using other instruments in concert with the automation instrument to track down performance issues, but that requires a person to notice the problem first.

I’m going to keep playing around with it; it’s got a lot of potential. But even though I titled my podcast episode “You Can’t Run a Script to Test Feel” without knowing about UI Automation, it seems to me that the sentiment still rings true.


1. Thanks to Ben Rimmington for the links, and esp. for cracking the code to link to a specific WWDC session video!

2. Because, after all, you’re the friggin’ Batman and Robin of Objective-C.

Asterisk and Obelisk

I mentioned in the podcast that I was having some trouble upgrading mogenerator that I hadn’t investigated yet.

Specifically, I was using mogenerator 1.23, and had tried upgrading to a newer version, I believe 1.25. This led to some build errors, at which point I set it aside, until this week.

Mystery now solved! What happened was that the replacement text for the <$Attribute.objectAttributeType> template macro changed from not including the trailing asterisk of an Objective-C object pointer to including it. Due to the old behavior, my template files always added their own asterisk after that macro. So when I upgraded, every usage became a pointer-to-a-pointer. While you can use pointers-to-pointers in property declarations, you can’t give them the “retain” attribute, so clang complained.

Updating both mogenerator and my templates fixed the problem. The trouble is, if I want to build an older version of my code, I’ll have to downgrade mogenerator again or make a one-time branch off the old commit with the template updates. So I hope mogenerator doesn’t need to make a change like this again.
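To make the change concrete, here’s a sketch with a hypothetical name attribute of type NSString (the exact lines in my templates differ):

```objc
// mogenerator 1.23: <$Attribute.objectAttributeType> expanded to "NSString",
// so my templates supplied the asterisk themselves:
//     @property (nonatomic, retain) <$Attribute.objectAttributeType> *<$Attribute.name>;
//     → @property (nonatomic, retain) NSString *name;

// Newer mogenerator: the macro expands to "NSString *", so the same
// template line produced a pointer-to-a-pointer:
//     → @property (nonatomic, retain) NSString **name;
// which clang rejects in combination with the "retain" attribute.

// The fix: drop the explicit asterisk from the template line:
//     @property (nonatomic, retain) <$Attribute.objectAttributeType><$Attribute.name>;
//     → @property (nonatomic, retain) NSString *name;
```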

Poetry in Motion

As I mentioned in the latest podcast, I’ve been playing way too much of Loren Brichter’s new word game, Letterpress, so I’ve had time to think about its user interface.

One of the things that keeps its simple, almost stark UI from looking amateurish to me is the quality of the animations. It’s not just that there is animation—that’s assumed for an iOS application—it’s that it has a particular character.


Take a look at the progress indicator. It’s a nonstandard shape, dots making up what looks kinda like a mostly-filled beaker, and it rotates to show network activity. But it doesn’t just rotate; it overshoots its rotation, and then settles back. And it doesn’t just turn in the same direction endlessly. Sometimes (but not always) it doubles back once.

These movements aren’t just a rote application of a standard iOS animation, a straight line from point A to point B. They look real, physical, like something alive, like it’s dancing.

The MacStories article on Letterpress (which has screenshots and videos of other cool animations in the app) also mentions that its style is similar to Microsoft’s new Metro user interface. Now, I haven’t used Metro or Windows 8. But if Windows has the same kind of sparseness to its visuals, does it also compensate for that with lively animations? It seems like it would need to, in order to engender the same positive reactions.

And woe to any developer saying, “Oh, I can do UI like that! It’s simple!” Any time saved with the appearance will just be eaten up trying to get the more complex animations to work.

Wild Schemes

While I work with them every day, I never really understood Xcode 4 schemes until recently.1

What changed was that I listened to Apple’s WWDC 2012 session 408 “Working with Schemes and Projects in Xcode”, presented in part by the esteemed @rballard, which explains them very well and in great detail.

Now, I can’t give you a link to this session, because it’s hidden behind an ADC login. You have to go to the ADC site, sign in, and search for the title. This, for me, is an argument for Apple to release its WWDC sessions to the public, at least the older ones, with permanent links and searchable keywords, because this one session alone would banish the frustration and confusion of many developers wrestling with Xcode 4.

In Xcode 3, in general, you had a combination of simple actions (the Build action, the Run action) and build configurations (the Debug build configuration, the Release build configuration). You couldn’t make new actions, you could only make new build configurations — useful if you wanted, say, to make two different kinds of release builds. You’d switch to the desired configuration, and then perform the action.

In Xcode 4, you can still make new build configurations if you wish (Project editor view → project → Info tab), but you can’t switch between them directly from the UI like you could in Xcode 3. Instead, you switch between schemes.

Schemes are built on top of all the older infrastructure: projects, targets, and build configurations. They can’t work without it, but they’re really an entirely different beast.

Every scheme has five predefined actions: Run, Test, Profile, Analyze, and Archive. Make a new scheme? Get five actions. Make a second scheme? Get five more actions.

More accurately, you get a bunch of settings for those five actions. To see them for yourself, choose your scheme name and Edit Scheme… from the Xcode 4 toolbar, like so:

Menu to Edit a Scheme.png

and you’ll see the scheme-editing sheet. There, select Run:

Editing a Scheme.png

The first setting? The build configuration to use. That’s how the old and the new are tied together. The Run, Test, and Analyze actions by default use the Debug configuration, while Profile and Archive use the Release configuration. I can see the logic in it, but it takes a little while to get your head around it. Xcode 4 is saying, “We don’t expect you to switch back and forth between build configurations. Just use the right action for the job, and we’ll take care of which build configuration to use for you.” (Well, if it could talk.2)

Hey, hey, hey: didn’t you forget something? What about building?

Turns out, building isn’t one of the scheme actions. That’s because it’s a prerequisite of each of the five actions. Want to Run? Gotta build first. Want to Analyze? Gotta build first.

One neat thing about schemes is that, for each action, you can choose what to build:

Scheme Build.png

When you first create an app project, and you configure Xcode to create a test target in addition to an app target, it sets up your default app scheme such that, for the Test action, it builds both your test target and your app target. You don’t need an extra test target scheme.

(I actually go further, and check the test target checkbox for the Run scheme action too, so I know immediately if code changes break my test build.)

But for every target you create after that, by default Xcode auto-creates a new, separate scheme for it. That…works, I guess, but I prefer to take advantage of the configurable build stage to ensure I only need one scheme per project.

Let’s say, in my sample project, I make a new MyLibrary target. This creates a MyLibrary scheme, which I don’t want. So I go to manage my schemes:

Menu to Manage Schemes.png

And I delete the MyLibrary scheme using the minus button:

Delete Scheme.png

(Note: this scheme-managing sheet is also where you can turn off the preference to auto-generate schemes.)

Then I go back to the build view of my remaining scheme and add in the MyLibrary target using the plus button:

Add Target to Scheme.png

Now, a library isn’t the best example here, because you normally want to set a build dependency between the library target and the app target, and that dependency will ensure the library gets built regardless of whether it’s explicitly part of the scheme. A better example would be an entirely separate binary, like a plugin or a command-line tool. But I think you get the picture.

Two more points:

First, things get more complicated if you want additional build configurations, such as a “Debug with Special Profiling” or “Release for Beta-Testers”. Because the build configurations are built into the scheme actions, if you want to switch build configurations, you either need to hand-edit your scheme each time before you build (blech) or make a new scheme for the new build configuration — and get your five new actions, even if, for example, you only need to duplicate the Run action, or the Archive action. Neither solution is ideal.

My opinion is, if you’re going to bake build configurations into actions, allow users to create new custom actions for a scheme.
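(One consolation: from the command line, xcodebuild doesn’t bake the configuration into anything, so you can override it per invocation without editing or duplicating a scheme. The workspace, scheme, and configuration names here are placeholders.)

```shell
# Build the scheme with a custom build configuration,
# leaving the scheme itself untouched:
xcodebuild -workspace MyApp.xcworkspace \
           -scheme MyApp \
           -configuration "Release for Beta-Testers" \
           build
```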

Second, have a look at the Xcode 4 Product menu:

Product Menu.png

You’ll see menu items for five scheme actions, Run, Test, Profile, Analyze, and Archive, at the top. Good so far.

But there’s also a Build menu item. What’s up with that?

And even more curious, there are also separate “Build” menu items for almost all the actions in their own submenu:

Build for Menu.png

and “Without Building” menu items for some of the actions in their own submenu:

Without Building Menu.png

So even though conceptually a scheme fuses a build pre-action with an action — that’s their whole point — Xcode 4 pretty much gives you every opportunity to invoke those parts of a scheme separately.

The fact that Xcode provides this much flexibility is very good, but I have to think — if breaking with the original concept is so important that there are eight menu items to work around it…perhaps the original concept needs some tweaking?

(Also, if you were wondering: the Build menu item is the same as “Build for Running”.)


1. Which is unfortunate, because I was working on Xcode (albeit a different part of it) when they were invented.

2. Which it can:

App Chowder

When I complain about this or that inadequacy in Xcode, there’s always a small but persistent chorus singing the praises of JetBrains’s Cocoa IDE AppCode.1

I’m planning on trying it2, but before I do, I figured I’d lay down some “claim chowder” (™ Gruber) explaining my doubts.

Heinz 57
The trouble with any company attempting to insert itself in another company’s value chain is that they’re playing a constant, unsuccessful game of catchup. GNUstep, anyone? Mono? Every new version is a chance for things to break, every new feature is something else to fall behind on.

My guess is that AppCode won’t support a lot of the things that are built into Xcode, from minor editing conveniences to essential features. I already know you can’t edit xibs in it, though their promo page claims that refactoring will work in xibs and storyboards, which would be impressive.

I suspect there will be at least one dealbreaker in this category. And by “dealbreaker” I mean, something I have to go back to Xcode for often enough that I might as well just keep using Xcode.

Foreign Exchange
While the screenshot on their promo page doesn’t look that bad, I wonder if I won’t be able to stand their non-native UI. (One thing that Xcode has going for it now is that at least it looks good.) I’m assuming AppCode, like their Java IDE IntelliJ IDEA, is written in Java, not native Objective-C, and that’s why it doesn’t use standard controls; its UI drawing is meant to be cross-platform. So there may also be UI behaviors that feel alien and throw me off.

If the UI doesn’t do the trick, speed issues might. The Java runtime on the Mac was never known for its speed. On the other hand, Xcode itself can be slow and laggy at times, so it will be interesting to see where AppCode lands in comparison. I won’t be trying it with any massive projects, so that’s good (might work better) and bad (won’t be able to judge how it handles them).

THIS One Goes There, THAT One Goes There
I’m also worried about integration. Much of the reason Xcode used to be so bad was that it couldn’t link against gcc directly. Clang is now deeply integrated into Xcode, including its index and its code editor. Does AppCode have to do double the work to get the same result? Or does it try to parse the source code in its own, not-quite-matching way, leading to weird inconsistencies? Will builds be as fast? Will there be cryptic errors when I try something nobody thought of to integrate properly? Can I really trust it to edit project and workspace files (whose formats are undocumented)?

To some degree, this is unfair. Xcode itself is full of weird bugs; expecting AppCode to be perfect is holding it to a different standard.

But I’d rather not have bugs on top of my bugs.

One Is the Loneliest Number
My final concern is that, even if I get everything working properly, even if my productivity skyrockets, I’m still going to be off doing something different than 99.9% of the Cocoa engineers out there. It’s going to take its toll mentally. It’s ironic that I would say this for a platform whose motto once was “Think Different”, but there you go.

But if I start now, at least I’ll have something to talk about for the podcast next week!


1. Still requires Xcode to be installed, because it uses the Xcode command line tool xcodebuild to build.

2. $99, but a 30-day trial period, which is smart.