Obduction Seduction

Obduction is a relatively recent graphical adventure game from the creators of Myst. I just played it and have some thoughts.

Hey, have you heard of Obduction? A graphical adventure game by the guys who made Myst, but released within the last decade?

In most ways, it really is just another Myst, though the story and setting are unrelated. Did you love Myst? Then you’ll love Obduction.

Never played Myst? How do lush graphics, fantastical world building, and atmospheric music sound to you?

“Adventure games” are also known as “interactive fiction”, because there’s a story behind all the locked doors, all the unexplained mysteries, all the obstacles you have to overcome.

The best IF shines where the time it takes you to solve the puzzles adds to the suspense of the narrative.

If that’s true, then boy did my experience have a lot of suspense.

There were two points where I got really stuck. Just couldn’t think of how to move forward. Got more and more frustrated.

In the past, I’ve often given in and looked at hints or walkthroughs. The trouble with that, for me at least, is that once I’ve looked at one hint, it’s almost impossible not to look at the next, and the next. The game becomes a plodding exercise in following instructions, and I almost always give up.

With Obduction, given that these days I have many hours inside with few distractions, I decided to tough it out. And indeed, even if it took days and days of looking around over and over, I would eventually have a stray thought come into my head, a new thing to try. The puzzle never turned out to be particularly fiendish or unexpected; it was always something simple I had missed.

The game works fine on modern hardware, with one exception: the documents (and there are many) are blurred and almost unreadable on Retina displays. The only way I was able to read them was to connect my laptop up to an older, non-Retina display, and switch the game to full-screen mode.

I suspect that the graphical optimizations from as little as 7 years ago don’t play well with Retina.

I took copious notes and, based on my experiences and the under-annotated drawings provided in the game itself, constructed detailed maps. I suppose in the end that’s why I’m writing this post: to show off my maps. While everything else in this post is light on spoilers, the maps have a lot of spoilers in them, so only look if you don’t plan on playing the game.

Enjoy!

ARM-Wrestling Your iOS Simulator Builds

Xcode 12 now sometimes includes an arm64 slice in iOS Simulator builds, and this can cause problems.

Did you know that Xcode 12 builds both x86_64 and arm64 slices for the iOS Simulator now?

Only under certain circumstances, though.

If you build with xcodebuild, and specify the generic destination, like so:

xcodebuild -project Cat.xcodeproj -scheme Cat -destination "generic/platform=iOS Simulator"

and then you do this from the command line:

lipo -archs Cat.app/Cat

you’ll see this:

x86_64 arm64

If you build the same thing within the Xcode application, specifying a particular simulator model, on any Mac now shipping, you’ll instead see this:

x86_64

Looks like they’re thinking ahead to ARM-based Macs, eh?

This can cause problems.

Let’s say you have a pre-built framework, ready for the simulator and any iOS device.

When you try to link that framework while building with the xcodebuild command above, you’ll get an odd-sounding error, something like this:

building for iOS Simulator, but linking in object file built for iOS, for architecture arm64

That’s weird, right? It’s looking for the arm64 slice, and it found it! But because it’s categorized as for device, instead of for the simulator, the linker errors out.

You might say to yourself, I can fix this! I’ll rebuild my framework using Xcode 12!

You can, but it may involve more effort than you’re willing to put in right now.

The old way to make a framework for shipping is with lipo. But when you try to use lipo -create to combine (a) a device binary with ARM slices and (b) a simulator binary with ARM and Intel slices, you get an error:

lipo: simulator/Meow.framework/Meow and devices/Meow.framework/Meow have the same architectures (arm64) and can't be in the same fat output file

So that’s out.

The new way to make a framework for shipping is to make it an XCFramework.

As far as I can tell, even in Xcode 12, support for this is not built into the application itself. You have to use xcodebuild, as described in this WWDC session. And your end product is no longer a .framework bundle, but rather an .xcframework bundle, requiring that every target that links against it be modified.
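For what it’s worth, building the XCFramework itself is a single xcodebuild invocation over frameworks you’ve already built, one per destination. Something like this, with hypothetical paths reusing the Meow example from above:

xcodebuild -create-xcframework \
    -framework devices/Meow.framework \
    -framework simulator/Meow.framework \
    -output Meow.xcframework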

This is fine if you control all the code yourself, but what if you’re getting a framework from a third-party vendor? Are they ready to switch to an XCFramework right now?

In any case, unless you’ve gotten your hands on one of those shiny new developer kits from Apple, there’s absolutely no need for you to be building simulator builds for ARM just yet.

Instead, don’t build for ARM at all.

Go to your Target build settings, go to Architectures, and then go to the new setting Excluded Architectures (EXCLUDED_ARCHES), which Apple recommends you use instead of the older setting Valid Architectures (VALID_ARCHS).

There, hover over it with your mouse and click the + button that appears, and it will give you the option of adding a subheading called “Any iOS Simulator SDK”. Do that, and add an arm64 entry to the build setting’s list of values.

Screenshot of the Xcode build settings user interface, with an arm64 value added under the "Any iOS Simulator SDK" subheading of the Excluded Architectures setting

You don’t want to exclude arm64 for every Debug build, since you could be building a Debug build for a device. You only want to exclude it for the simulator, which is exactly what the “Any iOS Simulator SDK” condition accomplishes.
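If you keep your build settings in xcconfig files instead, my understanding is that the equivalent is a conditional setting:

// Exclude arm64 only when building against any iOS Simulator SDK.
EXCLUDED_ARCHS[sdk=iphonesimulator*] = arm64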

You can also, instead of specifying it in the project, specify it in the xcodebuild invocation, like so:

xcodebuild -project Cat.xcodeproj -scheme Cat -destination "generic/platform=iOS Simulator" EXCLUDED_ARCHS=arm64

I hope this helps anyone who’s been puzzling over this issue!

Seeing It All in Roll20

I’ve been playing a lot of Dungeons & Dragons lately. You might have suspected if you saw my current Twitter account icon.

Since the Pandemic started, my campaigns have all taken place over the Internet. The way most people play D&D over the Internet is through a site called Roll20, which gives you easy access to your character information, maps, and a bunch of other things.

Roll20 is a very powerful website, free to use, and…a little fiddly. If you’re a GM (Game Master) for a game on Roll20, and you’ve already gone through the in-editor tutorials and tried things out for yourself, there are a couple of steps I’ve found that you can follow to make the experience better for your players.

1. Visible Character Sheets
I’ve found it helpful for each player to be able to see, not just their own character sheet, but the character sheets of all the other players in the game.

When, as a GM, you first create a character sheet for a player, you need to set both who can see that sheet, and who that sheet is controlled and editable by.

These are modified by clicking the character name to open the sheet, then clicking the Edit button, and finally going to the In Player’s Journals and Can Be Edited & Controlled By sections, respectively.

Most GMs start out by setting both fields only to the individual player who owns the character.

But if, instead, you set In Player’s Journals to the special All Players option, that character will be visible to all existing players, including the controlling player, and any new players you add, without any further work from you. That’s what I would recommend.

Screenshot of the edit view for a character sheet. The "In Player's Journals" section has been set to a single token called "All Players", and the "Can Be Edited & Controlled By" section below it has been set to a single token called "Player 1".

2. Visible Token Labels
Now that you’ve made the character sheets, you or the controlling player can drag those character tokens onto the current map page. (Be sure to start the drag on the character’s name, not on the icon.)

By default, this doesn’t show the name of the character, either to you or to the players.

You can change this, first, by clicking the token on the map to select it, then clicking the gear icon.

Under the Basic tab, in the Name section, there is a checkbox labeled Show nameplate? If you check that, the character’s name will be visible to both you and the controlling player.

Screenshot of the edit view for a map token, with the "Basic" tab selected. The "Name" section has a checkbox called "Show nameplate?" that has been checked.

If you want the label to be visible to everyone, which I would recommend, go to the Advanced tab and, in the Name section, check the See checkbox.

Screenshot of the edit view for a map token, with the "Advanced" tab selected. The "Name" section has a checkbox called "See" that has been checked.

Note that players can’t set these values for themselves. You need to do it as the GM, for every dragged-out token, individually.

Unfortunately, these changes aren’t “sticky”. If someone drags out a second token for a character, say, on a new map page, these changes have to be made all over again. That’s annoying!

Instead, select the tokens that you’ve already edited and that you want to appear on another page, and copy them. Go to the second page, and then paste the tokens there. This way, you’ll have the tokens available on the second page, with all your changes.

I hope this is helpful!

Installing CocoaPods: What Works for Me

I’m making this post mostly to have a reminder for myself.

Recently, I wound up on a Mac that didn’t have CocoaPods installed.

The instructions on the Install tab of https://cocoapods.org/ say to type this on the command line:

sudo gem install cocoapods

That does work.

But I run into problems if I then move directly on to the instructions on the Get Started tab and make myself a Podfile and type this on the command line:

pod install

In my experience, if I do this, I get an error saying it can’t find whatever Pod I specify, even if I know that Pod exists and is available to me.

Quite frustrating.
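For what it’s worth, the Podfile in question is about as minimal as a Podfile gets, something along these lines (the target and pod names here are just placeholders):

platform :ios, '10.0'

target 'MyApp' do
  use_frameworks!  # if your pods should be built as frameworks
  pod 'SomePod'
end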

The solution, which I found, like every self-respecting programmer does, on Stack Overflow, is to type this:

pod setup

For me, this command takes forever and eventually errors out, but it succeeds enough to allow my pod install command to start working.

So if you didn’t already know the magic command: now you do.

localizedUppercaseString and Localization

In the app I’m working on, we use all-uppercase strings for certain UI elements.

Sometimes that means that, if you were to import all the strings from our NSLocalizedString API calls into our Localizable.strings file as-is, you’d have an entry for the title-case version as well as the all-caps version. For example, you might have both “My Profile” and “MY PROFILE” strings.

What I’d like to do (and I’m not alone in this idea) is only ever use the title-case strings in code, so that we have fewer and more consistent (and more flexible) entries in the strings file. If I need the all-caps version of that string, I’ll use an Apple API like localizedUppercaseString to get it.

So instead of having this in your code, and two entries in your strings file:

NSLocalizedString(@"My Profile", @"Title for My Profile section of user interface");
NSLocalizedString(@"MY PROFILE", @"Title for My Profile section of user interface");

You would have this, and only one entry in your strings file:

NSLocalizedString(@"My Profile", @"Title for My Profile section of user interface");
NSLocalizedString(@"My Profile", @"Title for My Profile section of user interface").localizedUppercaseString;

My question is whether this might lead to problems.

A quick Internet search tells me that only Roman, Greek, Cyrillic, and Armenian scripts even have the concept of upper case (source). But I’m also reading that there are ways in other languages and scripts to convey emphasis.
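One thing I do know is that the case mapping itself is locale-sensitive, which is the whole point of the localized variants of these APIs. Here’s a quick Swift example of my own (not from the app), using the classic Turkish dotted/dotless i:

import Foundation

let title = "my profile"

// localizedUppercaseString (localizedUppercase in Swift) uses the current locale.
print(title.localizedUppercase)                                   // "MY PROFILE" in an English locale

// uppercased(with:) lets you specify the locale explicitly.
print("istanbul".uppercased(with: Locale(identifier: "en_US")))   // "ISTANBUL"
print("istanbul".uppercased(with: Locale(identifier: "tr_TR")))   // "İSTANBUL" (dotted capital İ)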

Would a human translator do a better job at appropriately conveying the uppercase nature of a string in other languages and scripts, in a way that Apple’s APIs would not? Or do Apple’s APIs give you basically the same results a translator would give you, or at least what people expect in most localized applications?

Does anyone have real-world experience with Apple’s APIs for this? I’m interested especially in non-European languages, where it will be harder for me to verify that the results are correct.

Let me know in the comments or on Twitter. Thanks!

Note 1: This post originally referred to uppercaseString by mistake; I always meant localizedUppercaseString.

Building a Better Ant Hill

Recently, I was tasked with answering the following question (actually two questions, but we’ll get to the second one at the end):

Is this:

@import Ant.Ant000;

going to compile faster than this:

@import Ant;

Restated more verbosely: in this era of modules, is it faster to import only the individual pieces of a module that you need, or does it make no difference, so you can rely on the simplicity of always importing the entire module in all your files?

I had always assumed the latter, but now I was being asked to prove it.

To do that, I made a new GitHub project, Import-Ant. Inside of it, you’ll find five Xcode projects: four test projects and a test builder project.

You may ask: why bother with a builder project? What do you need to build to conduct these sorts of tests?

Turns out, about 40,800 files.

I didn’t want differences between the two techniques listed above to get lost in the noise of a normal build, so I decided that my Ant framework (the thing to be imported) would have 100 header files — and a corresponding 100 source files — and my Hill iOS app (the thing doing the importing) would have quite a few more — 5,000 source files, each of which would import one Ant header file.

To avoid having to make either those 200 header/source files, or those 10,000 header/source files, by hand, I wrote some code to do it for me, which resulted in the Builder project. There’s the AntBuilder class to make the Ant framework files, and the HillBuilder class to make the Hill app files.
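The generation code isn’t anything fancy. Conceptually, it’s just a loop writing out numbered stub files, something in the spirit of this sketch (simplified, not the actual AntBuilder code):

import Foundation

// Sketch: write numbered Objective-C stub headers and sources for the Ant framework.
func writeAntStubs(into directory: URL, count: Int = 100) throws {
    for index in 0..<count {
        let name = String(format: "Ant%03d", index)   // Ant000, Ant001, …
        let header = """
        #import <Foundation/Foundation.h>

        @interface \(name) : NSObject
        @end
        """
        let source = """
        #import "\(name).h"

        @implementation \(name)
        @end
        """
        try header.write(to: directory.appendingPathComponent("\(name).h"), atomically: true, encoding: .utf8)
        try source.write(to: directory.appendingPathComponent("\(name).m"), atomically: true, encoding: .utf8)
    }
}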

Currently, there are four test projects that the Builder project will make files for:

  • 01 Import By Module
  • 02 Import Individually
  • 03 Import by Framework
  • 04 Import by File

The first two test projects address the problem described at the beginning of this post.

The second two test projects go more old school, converting the new module syntax back to straight-up preprocessor #import syntax. Individual files:

#import <Ant/Ant000.h>

versus the umbrella header:

#import <Ant/Ant.h>

So instead of just one Ant and Hill project pair, there are four of them.

To test build times, I would reboot each time, open all four projects, wait for them to finish indexing, and then build one of them. After writing down that build time, I would clean that project’s build folder, then go back and start the cycle over again….

I built for Debug to keep it simple and I used the default Simulator target that came up when opening the project, either the iPhone 8 or the iPhone 8 Plus.

This command-line invocation helped:

defaults write com.apple.dt.Xcode ShowBuildOperationDuration -bool YES

It makes Xcode show the most recent build time in its user interface, like so:

Screenshot of a portion of the Xcode main window showing the build result 'Succeeded' with an extra section reading '120.774s'

Here are average times:

01 Import By Module: 124.796s
02 Import Individually: 121.823s
03 Import by Framework: 126.342s
04 Import by File: 122.121s

The differences were between 0% and 4%, which I don’t find to be all that significant, for two reasons.

For one, I only built each project 3 times, and each test series had its own outliers. I suspect if I’d had the patience to build them 10 times, the differences would have smoothed out more. I’ve also since realized Xcode may take up significant amounts of CPU time even after its UI indicates that indexing has finished, lending more randomness to the proceedings.

For two, I actually built the first two projects 3X each separately before building all four projects for this post, and in that case, 01 Import By Module was faster than 02 Import Individually by 2%.

If you’re not convinced, you can certainly run them for yourself.

But for me, I think this proves there isn’t a significant penalty for using full module imports instead of trying to pick out individual module files to import.

The second question was whether this syntax influenced which files would be rebuilt if an Ant framework header was modified. Now, every individual Ant class is used by 50 Hill classes. If only, say, Ant000.h was modified, and only 50 Hill source files referenced it directly, would only those source files be rebuilt?

Turns out no. In all four test cases, two of which involved only references to specific Ant headers, the entirety of the Hill project was rebuilt even if only one Ant header was modified. Rebuild the module (in this case, the Ant framework), and everything that relies on any part of that module is also rebuilt by the current version of Xcode.

Sound right? I consider myself far from an expert in this area, so if anyone has any more information, feel free to leave a comment or ping me on Twitter. Thanks!

Restoring Transience

While doing some Core Data research, I came across my old GitHub project (from this post) demonstrating transient attributes.

I decided to update my project to current coding and Core Data practices, as an exercise, and I discovered a couple of interesting, if minor, points.

1. Managed Object Context Uses Weak References

The whole purpose of the project was that, if I tried to fetch the same objects in two different Core Data contexts, the transient attributes wouldn’t be preserved.

But now, I found that even doing the same fetch in the same context would return different Objective-C objects, and thus would not preserve the transient attributes for any objects that I had made previously. What had changed? What was going on?

Transient app window showing three rows, with two having nil name attributes, and only the third having a non-nil name

What had changed, as far as I can see, is that Core Data is far more aggressive about deallocating in-memory objects that have no references to them other than the context. Since my original project was doing a fetch every time it wanted the list of objects, and keeping no permanent reference to them, that meant that every object except the most recent one was going away and being recreated, and thus their transient attributes were not being preserved.

I’ve changed the project to keep its own list of the objects it has created so far, so they’ll stick around until I click the “Refresh” button.

This also means that I don’t need multiple contexts. I can just nil out my own list (and call reset on the context to be sure), and I’ll get new model object instances for my next fetch. This means that I can update my code to use the new NSPersistentContainer class and its main-thread-only viewContext for all my work, without worrying about maintaining multiple main-thread contexts myself.
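In sketch form, with names that are illustrative rather than the project’s actual code, the whole setup boils down to something like this:

import CoreData

// One container, one main-thread context.
let container = NSPersistentContainer(name: "Transient")
container.loadPersistentStores { _, error in
    precondition(error == nil, "Failed to load store: \(String(describing: error))")
}
let context = container.viewContext

// Keep strong references of my own so Core Data does not deallocate the objects
// (and their transient attributes) out from under me.
var items: [NSManagedObject] = []

// The "Refresh" button: drop my references and reset the context, so the next
// fetch returns brand-new managed object instances.
func refresh() {
    items.removeAll()
    context.reset()
}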

2. There’s a Trick to Editing a Managed Object Model at Runtime

In my original project, the model was set to not use a transient attribute. If you wanted to test transient attributes for yourself, you had to go in and manually change the model file in my project, rebuild, and run it again.

This time around, I decided to do better.

So while I still left that attribute as non-transient on disk, I added some code to edit the model in memory before it is used, and tied that value to a new checkbox in the user interface. This, the comments in NSManagedObjectModel assure me, is totally allowed and supported.

Transient app window showing a new checkbox on the right labeled 'Transient'

Now, if you toggle that checkbox (which deletes the current list contents), you’ll change the behavior to either use a transient name attribute (so that refreshes will nil out the names) or a non-transient name attribute (so that refreshes won’t nil out the names).

The trick? The instance of the model you load from disk can’t be edited at all, even before its use in a persistent container. You have to make a copy of it.
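In code, that looks roughly like this (a sketch: the name attribute is the one from the post, but the model and entity names here are placeholders):

import CoreData

// Load the compiled model from the bundle, then edit a copy of it.
let modelURL = Bundle.main.url(forResource: "Transient", withExtension: "momd")!
let diskModel = NSManagedObjectModel(contentsOf: modelURL)!
let editableModel = diskModel.copy() as! NSManagedObjectModel

// Flip the transient flag on the name attribute, driven by the checkbox.
let useTransientName = true
editableModel.entitiesByName["Item"]?.attributesByName["name"]?.isTransient = useTransientName

// Hand the edited copy to the persistent container.
let container = NSPersistentContainer(name: "Transient", managedObjectModel: editableModel)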

3. In-Memory Stores Can’t Be Transferred

My original project used an on-disk persistent data store, but deleted it every time the app started up.

This time around, instead, I used an in-memory persistent data store, which resets itself on every restart with no muss, no fuss. (This is also very useful for unit tests.)
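With NSPersistentContainer, switching to an in-memory store is just a matter of swapping in a different store description before loading. Again, a sketch with a placeholder model name:

import CoreData

let container = NSPersistentContainer(name: "Transient")

// Ask for an in-memory store instead of the default SQLite file on disk.
let inMemory = NSPersistentStoreDescription()
inMemory.type = NSInMemoryStoreType
container.persistentStoreDescriptions = [inMemory]

container.loadPersistentStores { _, error in
    precondition(error == nil, "Failed to load in-memory store: \(String(describing: error))")
}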

Now above, I said that if you toggle the “Transient” checkbox, all the current database contents are deleted, right? That’s because I have to throw away the current model, and make a new one with the transient attribute handled in a different way.

If I were using an on-disk persistent store, I could just reload the contents from disk using that new model.

But since I’m using an in-memory persistent store, there’s no on-disk backup to turn to.

And the APIs that Apple provides in NSPersistentStoreCoordinator, as far as I can see, do not allow you to detach an existing store from a coordinator and re-attach it to a new coordinator. They always assume you can reload the store contents from a file on disk, which creates a new store object.

Apple tends to say that Core Data is an object management framework independent of storage mechanism, but I’ve long believed that’s just hogwash. No company I’ve ever worked at uses Core Data for anything serious without backing it with a SQLite database, and all of Core Data’s heavy-duty features are geared towards that configuration.

Here, as we can see, even their APIs favor one kind of store over another. Which is as it should be! But I wish they’d stop pretending.

Hosted vs. Unhosted Keychain Tests in the Simulator in Xcode 9

Did you know that there’s a regression in Xcode 9’s support for automated tests involving the Keychain?

To show you, I’ve updated the Secrets test application to have two unit test targets, “Secrets Hosted Tests” and “Secrets Unhosted Tests”. As their names advertise, the first runs hosted inside the Secrets application, and the second does not, relying instead on Apple’s built-in mechanism for running unit tests without a host app. This means the second target needs to include the necessary SAMKeychain files instead of relying on the app to provide them.

Both targets execute only one test, the exact same one: after checking and trying to delete any previous entry, it tries to save a string to the Keychain:

func testSave() {
    do {
        // Check for any entry left over from a previous run…
        let _ = try SAMKeychain.password(forService: "MyService", account: "MyTestAccount")

        // …and delete it, so the save below starts from a clean slate.
        try SAMKeychain.deletePassword(forService: "MyService", account: "MyTestAccount")
    } catch let error as NSError {
        // errSecItemNotFound just means there was nothing to clean up.
        if (error.code != Int(errSecItemNotFound)) {
            print("\(error)")
        }
    }

    // The actual test: saving a string to the Keychain should not throw.
    XCTAssertNoThrow(try SAMKeychain.setPassword("Foo", forService: "MyService", account: "MyTestAccount"),
                     "Throws!")
}

If you run these two tests in Xcode 8 in the Simulator against iOS 9 and against iOS 10, they succeed under iOS 9 and they succeed under iOS 10. Good so far.

But if you run them in Xcode 9, with the exact same Simulators, only the hosted tests succeed under all versions of the OS: iOS 9, iOS 10, and iOS 11. The unhosted tests only succeed under iOS 9. They fail under iOS 10 with the error -34018 (“A required entitlement isn’t present”), and they fail under iOS 11 with the error -50 (“One or more parameters passed to a function were not valid”).

Puzzling, isn’t it? It can’t be an entitlements issue or a parameter issue, because the exact same thing works when tested with a host application, and works under an earlier Xcode.

I have some anecdotal evidence that Xcode 8 in its early releases had similar issues with Keychain testing in the Simulator, fixed in later versions, so here’s hoping Apple can once again fix these issues in later versions of Xcode 9. I’ve filed a Radar about it.

Keychain Reaction

In my previous post, I talked about a sample app I made that demonstrates Keychain entry persistence across app relaunches and app reinstalls.

What I didn’t talk about was what a pain in the ass working with the Keychain is.

Over the years, I’ve seen a lot of codebases with their own utility classes to make dealing with those ancient, ancient C-based Keychain APIs a little easier.

What I haven’t been able to find is a modern Swift library that does so. For Secrets, I wound up just copying in the files from Sam Soffes’s SAMKeychain library, which from my cursory googling seemed like one of the most recently-updated Keychain helper libraries. But it’s in Objective-C.

Any Swift-y Keychain libraries out there?

Secrets and Lies

Turns out, my entire previous post was trying to solve a problem that doesn’t exist.

I assumed, because I’d heard rumors about it and found this authoritative-sounding forum post, that Apple had indeed removed the persistence of Keychain entries for an app if a user deleted the app from their iOS device.

But while Apple did this in a beta release, they didn’t ship it in the final version (thanks to Nick Lockwood for pointing this out to me).

I verified this for myself with the new sample project Secrets, and you can too by downloading and running it for yourself, on both iOS 10 and iOS 11.

The app just shows a simple view with one text field, where you can type in anything you’d like. It is then saved in the app’s Keychain.

If you kill the app and come back to it, that value is again displayed.

If you delete the app and reinstall it, the value is still displayed. This is true both in the iOS 10.3.1 Simulator and the iOS 11 Simulator in Xcode 9 beta 4.

Now, in the iOS 11 Simulator, due to a bug, you can’t delete an app through the regular Simulator user interface.

So instead, you must delete it by hand from the file system, by going to ~/Library/Developer/CoreSimulator/Devices/, finding the UUID that matches that of the Simulator you’re using, then within that finding the UUID of your app, inside data/Containers/Bundle/Application.
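Depending on your Xcode version, xcrun simctl may also be able to do this deletion for you from the command line. The bundle identifier here is a placeholder:

xcrun simctl uninstall booted com.example.Secrets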

But if you go through all that trouble in the iOS 11 Simulator, then restart the Simulator and verify that the app is gone, then reinstall it…the Keychain entry is still there.

So you don’t need to use Shared Web Credentials the way I described in my last blog post. You can continue to rely on local app Keychain entries to keep your users logged in, even if the user deletes your app and reinstalls it.

For now.