Adventures in Tera

As promised, the Stargate SG-Fun podcast website is up and running.

In related news: I don’t always make websites, but when I do, I prefer static site generators: systems that, rather than constructing website pages right when you visit them using databases and complicated server software, instead run their potentially complicated creation logic ahead of time. That way, the site, after each behind-the-scenes update, always exhibits the same, inert, safe pages to the world.

When my Edge Cases cohost Wolf Rentzsch set up our Edge Cases podcast website (now back from the dead!), he used Jekyll.

When I, Michael Helmbrecht, Mike Critz, and Samuel Giddins set up the updated NSCoderNight website, we used Middleman. In that case, the system bitrotted so thoroughly within just four years that I had to abandon it completely!

So when I went looking around for an SSG to use for SG-Fun, I knew I wouldn’t be using that. And when I did pick one, I picked Zola, an SSG that prided itself on having “no dependencies”. This is, of course, a lie — it requires a ton of other components in order to be installed correctly. But I guess, in theory, because it’s a compiled Rust binary, it’s in less danger of being at the mercy of updated or abandoned third-party libraries over time, like Middleman was.


There are two ways to install Zola: brew and MacPorts.

Attempting to install via brew hung on my computer, and all my attempts to update brew — even to try to uninstall and reinstall brew — were unsuccessful. A great start! (Edit: this Mastodon post might help, but I haven’t tried it.)

MacPorts worked better, probably because I could install/update MacPorts by downloading a macOS package file and running that.

Once I had MacPorts working, I could use that to install Zola as described in the Zola docs.

Zola Basics

Zola uses the Tera template engine, created by Netlify, a well-funded startup aimed at enterprise websites. This allows Zola, a one-person open source project, to have a much more capable templating engine than it would be able to create for itself otherwise. Many SSGs adopt external systems for precisely this reason. Tera has loops, conditionals, string manipulation, math operators, macros: all the logic I needed for my relatively simple website.

The downside of such an approach is that, now, you have to look in two sets of documentation to figure out how to do things, instead of just one.

And even then, it was often difficult! I recommend doing what I did, which is to download a couple of the example theme sites listed in the Zola docs, to see how they put things together.

Zola, like most simple SSGs, has a concept of “pages” (think: blog posts) driven by Markdown data files. These are then organized in “sections”. So you could, say, have a main section for your blog posts (or podcast episodes), and a separate section for an About page.

Zola also has pagination functionality. If you have two hundred blog posts, it can, with only a little bit of effort (check out those theme sites!), split them up into pages containing however many posts you specify, such as ten or twenty.
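
For the record, here’s roughly what that looks like in a section template. This is a sketch based on Zola’s documented paginator object, assuming the section’s _index.md front matter sets paginate_by:

```jinja2
{# Section template sketch. Assumes the section's _index.md contains
   "paginate_by = 10"; the paginator variable names are Zola's. #}
{% for page in paginator.pages %}
  <h2><a href="{{ page.permalink }}">{{ page.title }}</a></h2>
{% endfor %}

{% if paginator.previous %}<a href="{{ paginator.previous }}">Newer</a>{% endif %}
{% if paginator.next %}<a href="{{ paginator.next }}">Older</a>{% endif %}
```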

Single Source of Truth

I don’t like retyping the same information into more than one place, because that allows those multiple sources of truth to diverge. Programmers call it D.R.Y.: don’t repeat yourself.

Zola helps with this, but it isn’t perfect.

  • Zola can extract the date of a post from the filename, so I only had to put it there, not both there and in the file’s contents. Well…in the post filename and also the audio filename.
  • I could use macros to take the length of the podcast audio file in seconds, an integer, and convert it into hours + colon + minutes + colon + seconds I used on the web page, while still being able to use it as an integer in the RSS feed.
  • I did have to specify the podcast title in two formats: once in slug format in the post filename (and separately in the audio filename), and once as the full human-readable title. E.g. thanks-send-more and Thanks, Send More.
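
As a sketch of that duration macro, here’s the general idea. The macro name, the page.extra.duration field, and the integer-division details are all illustrative, not my exact code:

```jinja2
{% macro duration(seconds) %}
{%- set hours = (seconds / 3600) | int -%}
{%- set minutes = ((seconds % 3600) / 60) | int -%}
{%- set secs = seconds % 60 -%}
{{ hours }}:{% if minutes < 10 %}0{% endif %}{{ minutes }}:{% if secs < 10 %}0{% endif %}{{ secs }}
{%- endmacro duration %}
```

The page template imports the macro file and calls {{ macros::duration(seconds=page.extra.duration) }}, while the RSS template keeps using the raw integer.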

It all wound up being doable, but I wrote more find/replace logic than I was expecting. Luckily, the Tera template language, while simple, was up to the task.

One interesting wrinkle is that the “post” file (i.e. podcast episode file), which is an .md file, could not itself contain conditional logic, just plain old data. So when I wanted to use a title in quotes, like Unjustly Maligned Episode #1 “Stargate SG-1” with Jason Snell, I had to put the logic to convert the smart quotes into the appropriate format somewhere else. For HTML pages? “ and ”. But for the podcast RSS file, which would only accept Unicode characters? The vastly less human-readable &#x201C; and &#x201D;.
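
The “somewhere else” amounts to a chain of replace filters; a sketch of the kind of thing I mean in the RSS template (the trailing safe keeps Tera from escaping the ampersands a second time):

```jinja2
{# Swap literal smart quotes for numeric character references
   that the podcast RSS file will accept. #}
{{ page.title | replace(from="“", to="&#x201C;") | replace(from="”", to="&#x201D;") | safe }}
```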

Another example: each podcast episode’s content section contains HTML tags like <p>, for display in web pages and in the <content:encoded> section of the RSS feed. But for the OpenGraph og:description section, that same content had to simply be divided by returns, not HTML tags. Luckily, there are Tera built-in functions to strip HTML tags, and to perform find/replace actions like “find every return and replace it with two returns”.
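
Both of those are essentially one-liners; a sketch of the og:description case (striptags and replace are Tera built-ins, though the exact newline escaping may need fiddling in practice):

```jinja2
<meta property="og:description"
      content="{{ page.content | striptags | replace(from="\n", to="\n\n") }}">
```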


In addition to Tera, Zola includes Sass, a CSS preprocessor. Sass was absolutely necessary to follow my policy of a single source of truth in the site’s CSS file.

I found myself wanting to, say, use a single value to set the left and right margins of a page, so I could iterate through different values without having to do a bunch of tedious copy and pasting throughout the CSS file.

Sass lets you use variables in your CSS file that are set to one value, but then used in a bunch of places. During site generation, it “compiles” your .scss file by turning all those variables into their concrete values, resulting in a regular CSS file.

It even lets you do simple math on these values.
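
A sketch of what I mean (the names here are illustrative, not from my actual stylesheet):

```scss
// One value drives both margins; change it once, and everything follows.
$page-margin: 2rem;

.content {
  margin-left: $page-margin;
  margin-right: $page-margin;
}

// Simple math on variables also works.
blockquote {
  margin-left: $page-margin * 2;
}
```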

I only use eight variables total in my .scss file, but it still felt like my visual adjustment/tweaking process became much easier.

One Last Thing

There wasn’t much that Zola wound up not being able to do for me, but there was one thing.

Zola won’t let you generate arbitrary individual files at the top level. Instead, you can only generate files of the pattern NameOfSection/index.html.

This is fine for “post” files (podcast episode files), but I wanted to use the power of the templating engine to automatically generate an .htaccess file for my site.

The best I could do was generate an htaccess/index.html file, and then write my own local script to move it to the right place.
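
That script is only a couple of lines. Here’s a self-contained sketch; the mkdir and printf just simulate Zola’s generated output (public is Zola’s default output directory), and the real script only needs the mv and rmdir:

```shell
#!/bin/sh
set -e

# Simulate Zola's generated output so this sketch is self-contained.
mkdir -p public/htaccess
printf 'RewriteEngine On\n' > public/htaccess/index.html

# The actual workaround: move the generated file to the site root.
mv public/htaccess/index.html public/.htaccess
rmdir public/htaccess
```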

Edge of Tomorrow

In this post I said, “As of now, I probably won’t resubmit [Edge Cases] to Apple Podcasts”.

I wasn’t going to.

But now I have.

And the details might be mildly interesting to someone.

I needed to log in to my Apple Podcasts Connect account anyway, for the sake of the podcast I’m on that we’re moving from the Incomparable.

Once I logged in, I saw the deactivated Edge Cases podcast. What the heck, I said to myself. Let’s try submitting it now that the files are back up, and see if it works.


As it turned out, the only error was that we needed a larger artwork file.

Here’s the original (shrunk down so it doesn’t disrupt the flow of this article):

It’s just the title “edge cases” in white text on black, 512 by 512 pixels.

Apple’s error message said it needed to be 3,000 by 3,000 pixels.

I could have just taken the existing file and upscaled it, but I wanted to do better than that.

Whatever original art file Wolf used to generate this is lost in the seas of time. So to make a bigger version, I’d have to recreate it.

First problem: the font is unusual. I didn’t know what it was, and it didn’t appear to exist on my computer.

The Internet to the rescue! There are apparently websites out there which take an image, and spit back out the names of the fonts that are used in it.

I’m going to try not to think about how free websites like this make their money, probably by taking the image and uploading it to evil ChatGPT artwork generators, but hey, free service.

Here’s the one I used:

It told me that the font was “Gara”. Yeah, I definitely didn’t have that one already. Another Internet search told me that I could indeed download the Gara font for free from a variety of websites. The one I chose was FontZillion:

(It apparently was added to FontZillion a quarter century ago.)

So, first of all, I downloaded the font files and added them to my Mac via the Font Book application.

Then I went about recreating the logo, in Acorn. I upscaled the old image into a new 3,000 x 3,000 Acorn image file, and typed in, resized, and arranged the new text until it matched as much as it was going to. If you squint at this animated gif long enough, you’ll see the minute changes that occur when I switch from the old file to the new one:

I uploaded the new file to the website with the same name as the old one, waited a bit for that change to propagate, and bam, the next time I submitted the podcast to Apple, it was accepted.

Case closed!

A Podcast of Thousands

Let’s talk podcasts!

First, the website for Edge Cases, the developer podcast that Wolf Rentzsch and I hosted some years ago, is back online — in case you even noticed it was down.

To be clear: the podcast is still over. Apologies, anyone hoping for new episodes!

As of now, I probably won’t resubmit it to Apple Podcasts, so sorry if you wanted to find it there. You can always use the RSS feed link that’s on the website directly.

Second, let’s talk about my other podcasts.

I was on Team Cockroach, a podcast about the NBC comedy The Good Place, on the Incomparable podcast network. This podcast is also over (as is the TV show!) but if you’re looking for per-episode reviews and season wrap-ups from me and some lovely friends of mine, you might want to check it out.

A podcast I’m still on is Stargate SG-Fun, a podcast about the late-90s/early-2000s science fiction show Stargate SG-1, that (a) is currently also on the Incomparable podcast network, and (b) has been on hiatus. Both of those things are going to change! We’ll be switching to our own standalone website shortly, and will have new episodes up soon. I’ll let you know when that happens. While the TV show is long finished, some other lovely friends and I are still working our way through reviews of its noteworthy episodes, and are currently on season 2.

So there’s lots more to come.

ARM-Wrestling Your iOS Simulator Builds

Xcode 12 sometimes builds iOS Simulator builds for arm64 now, and this can cause problems.

Did you know that Xcode 12 builds both x86_64 and arm64 slices for the iOS Simulator now?

Only under certain circumstances, though.

If you build with xcodebuild, and specify the generic destination, like so:

xcodebuild -project Cat.xcodeproj -scheme Cat -destination "generic/platform=iOS Simulator"

and then you run lipo on the resulting app binary from the command line:

lipo -archs path/to/Cat.app/Cat

you’ll see this:

x86_64 arm64

If you build the same thing within the Xcode application, specifying a particular simulator model, on any Mac now shipping, you’ll instead see this:

x86_64

Looks like they’re thinking ahead to ARM-based Macs, eh?

This can cause problems.

Let’s say you have a pre-built framework, ready for the simulator and any iOS device.

When you try to link that framework under the above command, you’ll get an odd-sounding error, something like this:

building for iOS Simulator, but linking in object file built for iOS, for architecture arm64

That’s weird, right? It’s looking for the arm64 slice, and it found it! But because it’s categorized as for device, instead of for the simulator, the linker errors out.

You might say to yourself, I can fix this! I’ll rebuild my framework using Xcode 12!

You can, but it may involve more effort than you’re willing to put in right now.

The old way you make a framework for shipping is with lipo. But when you try to use lipo -create to combine (a) a device binary with ARM slices and (b) a simulator binary with ARM and Intel slices, you get an error:

lipo: simulator/Meow.framework/Meow and devices/Meow.framework/Meow have the same architectures (arm64) and can't be in the same fat output file

So that’s out.

The new way to make a framework for shipping is to make it an XCFramework.

As far as I can tell, even in Xcode 12, support for this is not built into the application itself. You have to use xcodebuild, as described in this WWDC session. And your end product is no longer a .framework bundle, but rather an .xcframework bundle, requiring that every target that links against it be modified.
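
For reference, the xcodebuild dance from that session looks roughly like this, using the hypothetical Meow framework from above: archive once per destination, then stitch the results together.

```shell
xcodebuild archive -scheme Meow -destination "generic/platform=iOS" \
    -archivePath build/Meow-iOS SKIP_INSTALL=NO BUILD_LIBRARY_FOR_DISTRIBUTION=YES
xcodebuild archive -scheme Meow -destination "generic/platform=iOS Simulator" \
    -archivePath build/Meow-Simulator SKIP_INSTALL=NO BUILD_LIBRARY_FOR_DISTRIBUTION=YES
xcodebuild -create-xcframework \
    -framework build/Meow-iOS.xcarchive/Products/Library/Frameworks/Meow.framework \
    -framework build/Meow-Simulator.xcarchive/Products/Library/Frameworks/Meow.framework \
    -output Meow.xcframework
```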

This is fine if you control all the code yourself, but what if you’re getting a framework from a third-party vendor? Are they ready to switch to an XCFramework right now?

In any case, unless you’ve gotten your hands on one of those shiny new developer kits from Apple, there’s absolutely no need for you to be building simulator builds for ARM just yet.

Instead, don’t build for ARM at all.

Go to your Target build settings, go to Architectures, and then go to the new setting Excluded Architectures (EXCLUDED_ARCHES), which Apple recommends you use instead of the older setting Valid Architectures (VALID_ARCHS).

There, hover over it with your mouse and click the + button that appears, and it will give you the option of adding a subheading called “Any iOS Simulator SDK”. Do that, and add an arm64 entry to the build setting’s list of values.

Screenshot of the Xcode build settings user interface, with an arm64 entry added under “Any iOS Simulator SDK” in the Excluded Architectures setting

You don’t want to specify this for any Debug build, as you could be building a Debug build for the device. Just the simulator.
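
If you manage settings in xcconfig files instead of the UI, the equivalent line uses the standard conditional-setting syntax to scope the exclusion to simulator builds:

```
EXCLUDED_ARCHS[sdk=iphonesimulator*] = arm64
```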

You can also, instead of specifying it in the project, specify it in the xcodebuild invocation, like so:

xcodebuild -project Cat.xcodeproj -scheme Cat -destination "generic/platform=iOS Simulator" EXCLUDED_ARCHS=arm64

I hope this helps anyone who’s been puzzling over this issue!

Installing CocoaPods: What Works for Me

I’m making this post mostly to have a reminder for myself.

Recently, I wound up on a Mac that didn’t have CocoaPods installed.

The instructions on the Install tab of the CocoaPods website say to type this on the command line:

sudo gem install cocoapods

That does work.

But I run into problems if I then move directly on to the instructions on the Get Started tab and make myself a Podfile and type this on the command line:

pod install

In my experience, if I do this, I get an error saying it can’t find whatever Pod I specify, even if I know that Pod exists and is available to me.

Quite frustrating.

The solution, which I found, like every self-respecting programmer does, on Stack Overflow, is to type this:

pod setup

For me, this command takes forever and eventually errors out, but it succeeds enough to allow my pod install command to start working.

So if you didn’t already know the magic command: now you do.

localizedUppercaseString and Localization

In the app I’m working on, we use all-uppercase strings for certain UI elements.

Sometimes that means that, for our Localizable.strings file, if you were to import as-is all the strings from our NSLocalizedString API calls, you’d have an entry for the title-case version as well as the all-caps version. For example, you might have both “My Profile” and “MY PROFILE” strings.

What I’d like to do (and I’m not alone in this idea) is only ever use the title-case strings in code, so that we have fewer and more consistent (and more flexible) entries in the strings file. If I need the all-cap version of that string, I’ll use an Apple API like localizedUppercaseString to get it.

So instead of having this in your code, and two entries in your strings file:

NSLocalizedString(@"My Profile", @"Title for My Profile section of user interface");
NSLocalizedString(@"MY PROFILE", @"Title for My Profile section of user interface");

You would have this, and only one entry in your strings file:

NSLocalizedString(@"My Profile", @"Title for My Profile section of user interface");
NSLocalizedString(@"My Profile", @"Title for My Profile section of user interface").localizedUppercaseString;

My question is whether this might lead to problems.

A quick Internet search tells me that only Roman, Greek, Cyrillic, and Armenian scripts even have the concept of upper case (source). But I’m also reading that there are ways in other languages and scripts to convey emphasis.
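
Here’s the kind of spot check I can do myself; the Turkish dotted/dotless i is the classic case where the locale changes the result:

```swift
import Foundation

// localizedUppercaseString uses the user's current locale.
let title = "My Profile"
print(title.localizedUppercaseString)  // "MY PROFILE" under an English locale

// uppercased(with:) lets you test a specific locale directly.
print("istanbul".uppercased(with: Locale(identifier: "tr_TR")))  // "İSTANBUL"
```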

Would a human translator do a better job at appropriately conveying the uppercase nature of a string in other languages and scripts, in a way that Apple’s APIs would not? Or do Apple’s APIs give you basically the same results a translator would give you, or at least what people expect in most localized applications?

Does anyone have real-world experience with Apple’s APIs for this? I’m interested especially in non-European languages, where it will be harder for me to verify that the results are correct.

Let me know in the comments or on Twitter. Thanks!

Note 1: This post originally referred to uppercaseString by mistake; I always meant localizedUppercaseString.

Building a Better Ant Hill

Recently, I was tasked with answering the following question (actually two questions, but we’ll get to the second one at the end):

Is this:

@import Ant.Ant000;

going to compile faster than this:

@import Ant;

Restated more verbosely: in this era of modules, is it faster to import only the individual headers you need from a module, or does it make no difference, so that you can rely on the simplicity of always importing the entire module in all your files?

I had always assumed the latter, but now I was being asked to prove it.

To do that, I made a new GitHub project, Import-Ant. Inside of it, you’ll find five Xcode projects: four test projects and a test builder project.

You may ask: why bother with a builder project? What do you need to build to conduct these sorts of tests?

Turns out, about 40,800 files.

I didn’t want differences between the two techniques listed above to get lost in the noise of a normal build, so I decided that my Ant framework (the thing to be imported) would have 100 header files — and a corresponding 100 source files — and my Hill iOS app (the thing doing the importing) would have quite a few more — 5,000 source files, each of which would import one Ant header file.

To avoid having to make either those 200 header/source files, or those 10,000 header/source files, by hand, I wrote some code to do it for me, which resulted in the Builder project. There’s the AntBuilder class to make the Ant framework files, and the HillBuilder class to make the Hill app files.
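
The builder code itself isn’t complicated. Here’s a condensed sketch of the AntBuilder side of it; the names and output path are illustrative, not the project’s actual ones:

```swift
import Foundation

let outputDirectory = URL(fileURLWithPath: "Ant/Sources")
try FileManager.default.createDirectory(at: outputDirectory,
                                        withIntermediateDirectories: true)

// Emit 100 numbered header/implementation pairs: Ant000 through Ant099.
for index in 0..<100 {
    let name = String(format: "Ant%03d", index)
    let header = """
    #import <Foundation/Foundation.h>

    @interface \(name) : NSObject
    @end

    """
    let source = """
    #import "\(name).h"

    @implementation \(name)
    @end

    """
    try header.write(to: outputDirectory.appendingPathComponent("\(name).h"),
                     atomically: true, encoding: .utf8)
    try source.write(to: outputDirectory.appendingPathComponent("\(name).m"),
                     atomically: true, encoding: .utf8)
}
```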

Currently, there are four test projects that the Builder project will make files for:

  • 01 Import By Module
  • 02 Import Individually
  • 03 Import by Framework
  • 04 Import by File

The first two test projects address the problem described at the beginning of this post.

The second two test projects go more old school, converting the new module syntax back to straight-up C import syntax. Individual files:

#import <Ant/Ant000.h>

versus the umbrella header:

#import <Ant/Ant.h>

So instead of just one Ant and Hill project pair, there are four of them.

To test build times, I would reboot each time, open all four projects, wait for them to finish indexing, and then build one of them. After writing down that build time, I would clean that project’s build folder, then go back and start the cycle over again.

I built for Debug to keep it simple and I used the default Simulator target that came up when opening the project, either the iPhone 8 or the iPhone 8 Plus.

This command-line invocation helped:

defaults write com.apple.dt.Xcode ShowBuildOperationDuration -bool YES

It makes Xcode show the most recent build time in its user interface, like so:

Screenshot of portion of Xcode main window showing build result 'Succeeded' with extra section '120.774s'

Here are average times:

01 Import By Module: 124.796s
02 Import Individually: 121.823s
03 Import by Framework: 126.342s
04 Import by File: 122.121s

The differences were between 0% and 4%, which I don’t find to be all that significant, for two reasons.

For one, I only built each project 3 times, and each test series had its own outliers. I suspect if I’d had the patience to build them 10 times, the differences would have smoothed out more. I’ve also since realized Xcode may take up significant amounts of CPU time even after its UI indicates that indexing has finished, lending more randomness to the proceedings.

For two, I actually built the first two projects 3X each separately before building all four projects for this post, and in that case, 01 Import By Module was faster than 02 Import Individually by 2%.

If you’re not convinced, you can certainly run them for yourself.

But for me, I think this proves there isn’t a significant penalty for using full module imports instead of trying to pick out individual module files to import.

The second question was whether this syntax influenced which files would be rebuilt if an Ant framework header was modified. Now, every individual Ant class is used by 50 Hill classes. If only, say, Ant000.h was modified, and only 50 Hill source files referenced it directly, would only those source files be rebuilt?

Turns out no. In all four test cases, two of which involved only references to specific Ant headers, the entirety of the Hill project was rebuilt even if only one Ant header was modified. Rebuild the module (in this case, the Ant framework), and everything that relies on any part of that module is also rebuilt by the current version of Xcode.

Sound right? I consider myself far from an expert in this area, so if anyone has any more information, feel free to leave a comment or ping me on Twitter. Thanks!

Restoring Transience

While doing some Core Data research, I came across my old GitHub project (from this post) demonstrating transient attributes.

I decided to update my project to current coding and Core Data practices, as an exercise, and I discovered a couple interesting, if minor, points.

1. Managed Object Context Uses Weak References

The whole purpose of the project was that, if I tried to fetch the same objects in two different Core Data contexts, the transient attributes wouldn’t be preserved.

But now, I found that even doing the same fetch in the same context would return different Objective-C objects, and thus would not preserve the transient attributes for any objects that I had made previously. What had changed? What was going on?

Transient app window showing three rows, with two having nil name attributes, and only the third having a non-nil name

What had changed, as far as I can see, is that Core Data is far more aggressive in deleting in-memory objects that don’t have any references to them except the context. Since my original project was doing a fetch every time it wanted the list of objects, and keeping no permanent reference to them, that meant that every object except the most recent one was going away and being recreated, and thus their transient attributes were not being preserved.

I’ve changed the project to keep its own list of the objects it has created so far, so they’ll stick around until I click the “Refresh” button.

This also means that I don’t need multiple contexts. I can just nil out my own list (and call reset on the context to be sure), and I’ll get new model object instances for my next fetch. This means that I can update my code to use the new NSPersistentContainer class and its main-thread-only viewContext for all my work, without worrying about maintaining multiple main-thread contexts myself.

2. There’s a Trick to Editing a Managed Object Model at Runtime

In my original project, the model was set to not use a transient attribute. If you wanted to test transient attributes for yourself, you had to go in and manually change the model file in my project, rebuild, and run it again.

This time around, I decided to do better.

So while I still left that attribute as non-transient on disk, I added some code to edit the model in memory before it is used, and tied that value to a new checkbox in the user interface. This, the comments in NSManagedObjectModel assure me, is totally allowed and supported.

Transient app window showing a new checkbox on the right labeled 'Transient'

Now, if you toggle that checkbox (which deletes the current list contents), you’ll change the behavior to either use a transient name attribute (so that refreshes will nil out the names) or a non-transient name attribute (so that refreshes won’t nil out the names).

The trick? The instance of the model you load from disk can’t be edited at all, even before its use in a persistent container. You have to make a copy of it.
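
In code, the trick looks something like this (the model, entity, and attribute names here are stand-ins for the project’s actual ones):

```swift
import CoreData

let modelURL = Bundle.main.url(forResource: "Transient", withExtension: "momd")!
let diskModel = NSManagedObjectModel(contentsOf: modelURL)!

// The instance loaded from disk can't be edited; work on a copy instead.
let model = diskModel.copy() as! NSManagedObjectModel
model.entitiesByName["Item"]?.attributesByName["name"]?.isTransient = true

// Hand the edited copy to the container before loading any stores.
let container = NSPersistentContainer(name: "Transient", managedObjectModel: model)
```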

3. In-Memory Stores Can’t Be Transferred

My original project used an on-disk persistent data store, but deleted it every time the app started up.

This time around, instead, I used an in-memory persistent data store, which resets itself on every restart with no muss, no fuss. (This is also very useful for unit tests.)
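
Setting that up with NSPersistentContainer takes only a few lines; a sketch, with “Transient” standing in for the model name:

```swift
import CoreData

let container = NSPersistentContainer(name: "Transient")

// Replace the default on-disk store description with an in-memory one.
let description = NSPersistentStoreDescription()
description.type = NSInMemoryStoreType
container.persistentStoreDescriptions = [description]

container.loadPersistentStores { _, error in
    if let error = error {
        fatalError("Failed to load in-memory store: \(error)")
    }
}
```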

Now above, I said that if you toggle the “Transient” checkbox, all the current database contents are deleted, right? That’s because I have to throw away the current model, and make a new one with the transient attribute handled in a different way.

If I were using an on-disk persistent store, I could just reload the contents from disk using that new model.

But since I’m using an in-memory persistent store, there’s no on-disk backup to turn to.

And the APIs that Apple provides in NSPersistentStoreCoordinator, as far as I can see, do not allow you to detach an existing store from a coordinator and re-attach it to a new coordinator. It always assumes you can reload the store contents from a file on disk, which makes a new store object.

I’ve long believed that, even though Apple tends to say Core Data is an object management framework independent of storage mechanism, that’s just hogwash. No company I’ve ever worked at uses Core Data for anything serious without backing it with a SQLite database, and all of Core Data’s heavy-duty features are geared towards that configuration.

Here, as we can see, even their APIs favor one kind of store over another. Which is as it should be! But I wish they’d stop pretending.

Hosted vs. Unhosted Keychain Tests in the Simulator in Xcode 9

Did you know that there’s a regression in Xcode 9’s support for automated tests involving the Keychain?

To show you, I’ve updated the Secrets test application to have two unit test targets, “Secrets Hosted Tests” and “Secrets Unhosted Tests”. They are both unit test targets and, as advertised, the first runs against the Secrets application, and the second does not, relying on Apple’s built-in mechanism to run unit tests. This means the second target needs to include the necessary SAMKeychain files instead of relying on the app to provide them.

Both targets execute only one test, the exact same one: after checking and trying to delete any previous entry, it tries to save a string to the Keychain:

func testSave() {
    do {
        let _ = try SAMKeychain.password(forService: "MyService", account: "MyTestAccount")

        try SAMKeychain.deletePassword(forService: "MyService", account: "MyTestAccount")
    } catch let error as NSError {
        // Only "item not found" is acceptable here; anything else is a real failure.
        if error.code != Int(errSecItemNotFound) {
            XCTFail("Unexpected Keychain error: \(error)")
        }
    }

    XCTAssertNoThrow(try SAMKeychain.setPassword("Foo", forService: "MyService", account: "MyTestAccount"),
                     "Saving to the Keychain should not throw")
}

If you run these two tests in Xcode 8 in the Simulator against iOS 9 and against iOS 10, they succeed under iOS 9 and they succeed under iOS 10. Good so far.

But if you run them in Xcode 9, with the exact same Simulators, only the hosted tests succeed under all versions of the OS: iOS 9, iOS 10, and iOS 11. The unhosted tests only succeed under iOS 9. They fail under iOS 10 with the error -34018 (“A required entitlement isn’t present”), and they fail under iOS 11 with the error -50 (“One or more parameters passed to a function were not valid”).

Puzzling, isn’t it? It can’t be an entitlements issue or a parameter issue, because the exact same thing works when tested with a host application, and works under an earlier Xcode.

I have some anecdotal evidence that Xcode 8 in its early releases had similar issues with Keychain testing in the Simulator, fixed in later versions, so here’s hoping Apple can once again fix these issues in later versions of Xcode 9. I’ve filed a Radar about it.

Keychain Reaction

In my previous post, I talked about a sample app I made that demonstrates Keychain entry persistence across app relaunches and app reinstalls.

What I didn’t talk about was what a pain in the ass working with the Keychain is.

Over the years, I’ve seen a lot of codebases that included a lot of utility classes to make dealing with those ancient, ancient C-based Keychain APIs a little easier.

What I haven’t been able to find is a modern Swift library that does so. For Secrets, I wound up just copying in the files from Sam Soffes’s SAMKeychain library, which from my cursory googling seemed like one of the most recently-updated Keychain helper libraries. But it’s in Objective-C.

Any Swift-y Keychain libraries out there?