Scary Sharks and Custom CodingKeys

I had to handle a fun little challenge with Codable and unorthodox JSON recently (as you do).

Apple’s Codable API has been around for a while, and it’s an example of the best kind of API: it makes the easy things easy, and the hard things possible.

For example, let’s say I’d like to load in some JSON data about sharks in movies:

[
    {
	"type": "Great White Shark",
	"movie": "Jaws",
        "length": 15
    },
    {
        "type": "Megalodon",
        "movie": "The Meg",
        "length": 50
    },
    {
        "type": "Mako Shark",
        "movie": "Deep Blue Sea",
        "length": 14
    }
]

Make a struct with the same fields, let loose your decoder, and you’re done!

struct MovieShark: Codable {
    let type: String
    let movie: String
    let length: Float
}

let sharks = try JSONDecoder().decode([MovieShark].self, from: data)

(Note: there are a few more steps to get running code, but you get the gist.)

If you need to fiddle with the keys a bit, say, because your data uses shark_type instead of just type, you add a CodingKeys enum nested inside the struct:

    enum CodingKeys: String, CodingKey {
        case type = "shark_type"
        case movie
        case length
    }
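
Putting those pieces together, here’s a minimal sketch you could paste into a playground (the sample JSON and variable names here are my own):

import Foundation

// Assumes the MovieShark struct above, with the CodingKeys enum nested inside it.
let json = """
[
    { "shark_type": "Great White Shark", "movie": "Jaws", "length": 15 }
]
"""

let sharks = try JSONDecoder().decode([MovieShark].self, from: Data(json.utf8))
print(sharks.first?.type ?? "")   // "Great White Shark"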

I’ve tried (admittedly not that hard) to figure out how CodingKeys works.

  • How can defining a nested enum of a particular name cause this behavior?
  • How does the Swift enum type automatically conform to CodingKey?

Googling didn’t turn up any details on this. And, I mean, it doesn’t matter, right?

Well, sometimes it does.

The challenge I was facing was that my data wasn’t in the format shown above, but rather in this format:

[
    {
        "Great White Shark" : {
            "movie": "Jaws",
            "length": 15
	}
    },
    {
        "Megalodon": {
            "movie": "The Meg",
            "length": 50
        }
    },
    {
        "Mako Shark": {
            "movie": "Deep Blue Sea",
            "length": 14
        }
    }
]

The simple Codable approach can’t handle top-level dynamic keys like this. My question was, can I use Codable to do this at all?

And the answer is yes.

Turns out, CodingKeys doesn’t have to be an enum.

You can satisfy the CodingKey protocol with a struct that implements all its requirements: a string value property and initializer, and an integer value property and initializer.

Once you’ve done that, of course, you no longer have an enum defining the static keys you still need, so I stashed those in a separate, unrelated enum called OtherCodingKeys.

struct MovieShark {
    let type: String
    let movie: String
    let length: Float
    
    struct CodingKeys: CodingKey {
        var stringValue: String
        init(stringValue: String) {
            self.stringValue = stringValue
        }

        var intValue: Int? { return nil }
        init?(intValue: Int) { return nil }
    }

    enum OtherCodingKeys {
        case movie
        case length
    }
}

You’ll notice MovieShark no longer declares itself as conforming to Codable here. That’s because I need to implement Encodable and Decodable separately, with custom code.

First, Encodable:

private struct MovieSharkContents: Codable {
    let movie: String
    let length: Float
}

extension MovieShark: Encodable {
    func encode(to coder: Encoder) throws {
        var container = coder.container(keyedBy: CodingKeys.self)
        try container.encode(MovieSharkContents(movie: movie, length: length), forKey: CodingKeys(stringValue: type))
    }
}

There are two steps:

  • Get the top-level container of the encoder with coder.container(keyedBy: CodingKeys.self). This is the standard way to start a custom encoding.
  • Specify a dynamic key by using the CodingKeys string initializer, CodingKeys(stringValue: type). You can’t just specify a random string, because that won’t be of the correct type.

Note: because the values of the nested dictionary are heterogeneous, and Swift can’t serialize a [String: Any] type, I had to make an intermediate type, MovieSharkContents, to represent them.
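
As a quick sanity check (this snippet is mine, not from the original post), encoding a single MovieShark should now produce the dynamic-key shape:

// The struct keeps its memberwise initializer, since no init is declared in its body.
let shark = MovieShark(type: "Megalodon", movie: "The Meg", length: 50)
let encoded = try JSONEncoder().encode(shark)
print(String(data: encoded, encoding: .utf8)!)
// Prints something like {"Megalodon":{"movie":"The Meg","length":50}} (key order may vary)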

Next, Decodable:

enum MovieSharkError: Error {
    case unableToDecode
}

extension MovieShark: Decodable {
    init(from coder: Decoder) throws {
        let container = try coder.container(keyedBy: CodingKeys.self)
        for key in container.allKeys {
            type = key.stringValue
            let contents = try container.decode(MovieSharkContents.self, forKey: key)
            movie = contents.movie
            length = contents.length
            return
        }
        throw MovieSharkError.unableToDecode
    }
}

Here, since the top-level key has an unknown name, I iterate through all the top-level keys, and pick the first.

I transfer that top-level key to the type property, and the dictionary values inside it to the other properties of MovieShark. If I don’t find any top-level key at all, I throw an error.

In this way, I can keep a straightforward MovieShark struct with all the properties I expect, but also handle both loading and saving its custom JSON.
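
And decoding goes the other way. Here’s a small sketch of my own, assuming the same types as above:

let dynamicJSON = """
[
    { "Megalodon": { "movie": "The Meg", "length": 50 } }
]
"""

let decoded = try JSONDecoder().decode([MovieShark].self, from: Data(dynamicJSON.utf8))
print(decoded.first?.type ?? "")    // "Megalodon"
print(decoded.first?.movie ?? "")   // "The Meg"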

I figured out how to do this, by the way, from the very helpful Flight School Guide to Swift Codable. I still don’t get all the Swift magic behind CodingKeys, but I know a little more about how to use it!

Adventures in Tera

As promised, the Stargate SG-Fun podcast website is up and running.

In related news: I don’t always make websites, but when I do, I prefer static site generators: systems that, rather than constructing pages with databases and complicated server software at the moment you visit them, run their potentially complicated creation logic ahead of time. That way, after each behind-the-scenes update, the site always presents the same inert, safe pages to the world.

When my Edge Cases cohost Wolf Rentzsch set up our Edge Cases podcast website (now back from the dead!), he used Jekyll.

When I, Michael Helmbrecht, Mike Critz, and Samuel Giddins set up the updated NSCoderNight website, we used Middleman. In that case, the system bitrotted so thoroughly within just four years that I had to abandon it completely!

So when I went looking around for an SSG to use for SG-Fun, I knew I wouldn’t be using that. And when I did pick one, I picked Zola, an SSG that prided itself on having “no dependencies”. This is, of course, a lie — it requires a ton of other components in order to be installed correctly. But I guess, in theory, because it’s a compiled Rust binary, it’s in less danger of being at the mercy of updated or abandoned third-party libraries over time, like Middleman was.

Installation

There are two ways to install Zola: brew and MacPorts.

Attempting to install via brew hung on my computer, and all my attempts to update brew — even to try to uninstall and reinstall brew — were unsuccessful. A great start! (Edit: this Mastodon post might help, but I haven’t tried it.)

MacPorts worked better, probably because I could install/update MacPorts by downloading a macOS package file and running that.

Once I had MacPorts working, I could use that to install Zola as described in the Zola docs.

Zola Basics

Zola uses the Tera template engine, created by Netlify, a well-funded startup aimed at enterprise websites. This allows Zola, a one-person open source project, to have a much more capable templating engine than it would be able to create for itself otherwise. Many SSGs adopt external systems for precisely this reason. Tera has loops, conditionals, string manipulation, math operators, macros: all the logic I needed for my relatively simple website.

The downside of such an approach is that, now, you have to look in two sets of documentation to figure out how to do things, instead of just one.

And even then, it was often difficult! I recommend doing what I did, which is to download a couple of the example theme sites listed on the Zola site, to see how they put things together.

Zola, like most simple SSGs, has a concept of “pages” (think: blog posts) driven by Markdown data files. These are then organized into “sections”. So you could, say, have a main section for your blog posts (or podcast episodes), and a separate section for an About page.

Zola also has pagination functionality. If you have two hundred blog posts, it can, with only a little bit of effort (check out those theme sites!), split your posts into individual pages containing however many posts you specify, such as ten or twenty.

Single Source of Truth

I don’t like retyping the same information into more than one place, because that allows those multiple sources of truth to diverge. Programmers call it D.R.Y.: don’t repeat yourself.

Zola helps with this, but it isn’t perfect.

  • Zola can extract the date of a post from the filename, so I only had to put it there, not both there and in the file’s contents. Well…in the post filename and also the audio filename.
  • I could use macros to take the length of the podcast audio file in seconds, an integer, and convert it into the hours:minutes:seconds format I used on the web page, while still being able to use it as an integer in the RSS feed.
  • I did have to specify the podcast title in two formats: once in slug format in the post filename (and separately in the audio filename), and once as the full human-readable title. E.g. thanks-send-more and Thanks, Send More.

It all wound up being doable, but I wrote more find/replace logic than I was expecting. Luckily, the Tera templating language, while simple, was up for the task.

One interesting wrinkle is that the “post” file (i.e. podcast episode file), which is an .md file, could not itself contain conditional logic, just plain old data. So when I wanted to use a title in quotes, like Unjustly Maligned Episode #1 “Stargate SG-1” with Jason Snell, I had to put the logic to convert the smart quotes into the appropriate format somewhere else. For HTML pages? “ and ”. But for the podcast RSS file, which would only accept Unicode character references? The vastly less human-readable &#x201C; and &#x201D;.

Another example: each podcast episode’s content section contains HTML tags like <p>, for display in web pages and in the <content:encoded> section of the RSS feed. But for the OpenGraph og:description section, that same content had to simply be divided by returns, not HTML tags. Luckily, there are Tera built-in functions to strip HTML tags, and to perform find/replace actions like “find every return and replace it with two returns”.

Sassy

In addition to Tera, Zola includes Sass, a CSS preprocessor. Sass was absolutely necessary to follow my policy of a single source of truth in the site’s CSS file.

I found myself wanting to, say, use a single value to set the left and right margins of a page, so I could iterate through different values without having to do a bunch of tedious copy and pasting throughout the CSS file.

Sass lets you use variables in your CSS file that are set to one value, but then used in a bunch of places. During site generation, it “compiles” your .scss file by turning all those variables into their concrete values, resulting in a regular CSS file.

It even lets you do simple math on these values.

I only use eight variables total in my .scss file, but it still felt like my visual adjustment/tweaking process became much easier.

One Last Thing

There wasn’t much that Zola wound up not being able to do for me, but there was one thing.

Zola won’t let you generate arbitrary individual files at the top level. Instead, you can only generate files of the pattern NameOfSection/index.html.

This is fine for “post” files (podcast episode files), but I wanted to use the power of the templating engine to automatically generate an .htaccess file for my site.

The best I could do was generate an htaccess/index.html file, and then write my own local script to move it to the right place.

Edge of Tomorrow

In this post I said, “As of now, I probably won’t resubmit [Edge Cases] to Apple Podcasts”.

I wasn’t going to.

But now I have.

And the details might be mildly interesting to someone.

I needed to log in to my Apple Podcasts Connect account anyway, for the sake of the podcast I’m on that we’re moving from the Incomparable.

Once I logged in, I saw the deactivated Edge Cases podcast. What the heck, I said to myself. Let’s try submitting it now that the files are back up, and see if it works.

Nope!

But the only error was that we needed a larger artwork file.

Here’s the original (shrunk down so it doesn’t disrupt the flow of this article):

It’s just the title “edge cases” in white text on black, 512 by 512 pixels.

Apple’s error message said it needed to be 3,000 by 3,000 pixels.

I could have just taken the existing file and upscaled it, but I wanted to do better than that.

Whatever original art file Wolf used to generate this is lost in the seas of time. So to make a bigger version, I’d have to recreate it.

First problem: the font is unusual. I didn’t know what it was, and it didn’t appear to exist on my computer.

The Internet to the rescue! There are apparently websites out there which take an image, and spit back out the names of the fonts that are used in it.

I’m going to try not to think about how free websites like this make their money, probably by taking the image and uploading it to evil ChatGPT artwork generators, but hey, free service.

Here’s the one I used: https://www.fontsquirrel.com/matcherator

It told me that the font was “Gara”. Yeah, I definitely didn’t have that one already. Another Internet search told me that I could indeed download the Gara font for free from a variety of websites. The one I chose was FontZillion: https://www.fontzillion.com/fonts/iaki-marqunez/gara

(It apparently was added to FontZillion a quarter century ago.)

So, first of all, I downloaded the font files and added them to my Mac via the Font Book application.

Then I went about recreating the logo, in Acorn. I upscaled the old image into a new 3,000 x 3,000 Acorn image file, and typed in, resized, and arranged the new text until it matched as much as it was going to. If you squint at this animated gif long enough, you’ll see the minute changes that occur when I switch from the old file to the new one:

I uploaded the new file to the https://edgecases.com website with the same name as the old one, waited a bit for that change to propagate, and bam, the next time I submitted the podcast to Apple, it was accepted.

Case closed!

A Podcast of Thousands

Let’s talk podcasts!

First, the website for Edge Cases, the developer podcast that Wolf Rentzsch and I hosted some years ago, is back online — in case you even noticed it was down.

To be clear: the podcast is still over. Apologies, anyone hoping for new episodes!

As of now, I probably won’t resubmit it to Apple Podcasts, so sorry if you wanted to find it there. You can always use the RSS feed link that’s on the website directly.

Second, let’s talk about my other podcasts.

I was on Team Cockroach, a podcast about the NBC comedy The Good Place, on the Incomparable podcast network. This podcast is also over (as is the TV show!) but if you’re looking for per-episode reviews and season wrap-ups from me and some lovely friends of mine, you might want to check it out.

A podcast I’m still on is Stargate SG-Fun, a podcast about the late-90s/early-2000s science fiction show Stargate SG-1, that (a) is currently also on the Incomparable podcast network, and (b) has been on hiatus. Both of those things are going to change! We’ll be switching to our own standalone website shortly, and will have new episodes up soon. I’ll let you know when that happens. While the TV show is long finished, some other lovely friends and I are still working our way through reviews of its noteworthy episodes, and are currently on season 2.

So there’s lots more to come.

Object Permanence in Roll20

About a year ago, I described how you, as GM (Game Master), could improve your players’ token settings — but how these improvements couldn’t be made “sticky”, i.e. wouldn’t persist if your players dragged their token onto a new Roll20 page.

Turns out, they can be made permanent — if you take a particular extra step.

  1. First, follow all the steps from this post.
  2. Then, select the token on the current Roll20 map page, and go back to the Edit window.
  3. Now, in the Default Token (Optional) box, the Use Selected Token button should be enabled. Click that. This should make that selected token the default token, including all the settings changes you’ve just made to it.

Now, every time the player drags out that character’s token, it should be configured the way you want.

There’s no step four!

In My Bag of Holding: Helpful D&D Links

I talked previously about using Roll20 to play D&D during the pandemic, but there are plenty of other D&D resources I’ve also found handy.

In addition to Roll20, the other star of the show is D&D Beyond.

The good news: first, their online character sheets and character generation options are tremendous.

Second, they’ve got all the content from the Player’s Handbook, the Dungeon Master’s Guide, and everything else from Wizards of the Coast, like Tasha’s Cauldron of Everything, all usable in your character sheets, and all searchable without needing to fumble through multiple books to find what you need in the middle of a session.

The bad news: it’s commercial content, so anything beyond the basics isn’t available for free. This can be especially unfortunate if you’ve already bought the physical books, and now have to buy them again, at basically the same price, for online convenience.

But there’s more good news: if somebody else you know has already bought the content on D&D Beyond, they can share it with you. It’s especially good news if they’ve bought one of the bundles, which can include…well, everything. (And that’s pricey.)

Just don’t get confused by D&D Beyond subscriptions. They don’t include the WotC content.

If you want to use the stats of your D&D Beyond character sheets in Roll20 during a session, such as your attack/damage/ability rolls, you can install the browser plug-in Beyond20. I use it, as do lots of other players I know.

(A quick aside: if you want to roll your D&D dice on your Mac or iPhone/iPad with a little more graphical flair, I recommend Dice by PCalc by James Thomson. No character sheet integration, but gorgeous graphics and some amazing About box Easter eggs.)

If you want to import your D&D Beyond character sheets, with all their details, directly into Roll20 instead of retyping or regenerating everything, you can use the BeyondImporter from Kyle B’s version of the Roll20APIScripts. You need a Roll20 Pro account to use scripts, and it’s a little fiddly (and subject to breakage over time), but it’s worked so far for me.

And, once you have your character sheet in Roll20, if you want to give your character’s avatar a nifty and colorful frame, you can use the free website Token Stamp to do so.

While the WotC game modules have maps for their stories, I often found myself wanting additional maps for the side adventures I create as a DM. So, I went looking for artists online who were providing these so-called “battle maps”.

The best I found was Party of Two, whose maps are gorgeous:

Preview Tumblr: https://partyoftwo.tumblr.com
Patreon: https://www.patreon.com/partyoftwo

I’ve been a happy Patreon subscriber of theirs for almost a year now, and have used their maps for:

  • A magic shop fronting a hidden dungeon
  • Rooms at an inn where the party was attacked by assassins
  • An extended cave system with a giant crab, sea hags, and a hydra
  • A lighthouse guarded by undead minotaurs

Their maps are lushly colored and intricate, and thus relatively specific, so it’s hard to slot them into encounters whose details I’d already written. I’ve solved this in two ways:

The easy way: rewrite the encounter, or start with the map and make up new details to match it.

The hard way: take pieces of Party of Two maps and edit them together with my own meager photoshopping skills. For this, I use, not Photoshop, but Acorn, by Gus Mueller. The cave system mentioned above is an example of this, as well as other, more ambitious projects, which will remain [REDACTED] for now.

And lastly, when I invent homebrew monsters for my encounters, I find myself wanting to display their stats in the layout the Monster Manual does: that familiar golden-yellow box.

While nothing I’ve found matches it exactly, Statblock5e comes the closest, while being customizable enough for my needs.

Here’s a quick-reference list of everything I’ve talked about:

  • Roll20
  • D&D Beyond
  • Beyond20
  • Dice by PCalc
  • Token Stamp
  • Party of Two (Tumblr, Patreon)
  • Acorn
  • Statblock5e

Obduction Seduction

Obduction is a relatively recent graphical adventure game by the creators of Myst. I played it recently and have some thoughts.

Hey, have you heard of Obduction? A graphical adventure game by the guys who made Myst, but released within the last decade?

In most ways, it really is just another Myst, though the story and setting are unrelated. Did you love Myst? Then you’ll love Obduction.

Never played Myst? How do lush graphics, fantastical world building, and atmospheric music sound to you?

“Adventure games” are also known as “interactive fiction”, because there’s a story behind all the locked doors, all the unexplained mysteries, all the obstacles you have to overcome.

The best IF shines where the time it takes you to solve the puzzles adds to the suspense of the narrative.

If that’s true, then boy did my experience have a lot of suspense.

There were two points where I got really stuck. Just couldn’t think of how to move forward. Got more and more frustrated.

In the past, I’ve often given in and looked at hints or walkthroughs. The trouble with that, for me at least, is that once I’ve looked at one hint, it’s almost impossible not to look at the next, and the next. The game becomes a plodding exercise in following instructions, and I almost always give up.

With Obduction, given that, nowadays, I personally have many hours inside with few distractions, I decided to tough it out. And indeed, even if it took days and days, after looking around over and over, I would eventually have a stray thought come into my head, a new thing to try. The puzzle never turned out to be particularly fiendish or unexpected; it was always something simple I’d missed.

The game works fine on modern hardware, with one exception: the documents (and there are many) are blurred and almost unreadable on Retina displays. The only way I was able to read them was to connect my laptop up to an older, non-Retina display, and switch the game to full-screen mode.

I suspect that the graphical optimizations from as little as 7 years ago don’t play well with Retina.

I took copious notes and, based on my experiences and the under-annotated drawings provided in the game itself, constructed detailed maps. I suppose in the end that’s why I’m writing this post: to show off my maps. While everything else in this post is light on spoilers, the maps have a lot of spoilers in them, so only look if you don’t plan on playing the game.

Enjoy!

ARM-Wrestling Your iOS Simulator Builds

Xcode 12 sometimes builds iOS Simulator builds for arm64 now, and this can cause problems.

Did you know that Xcode 12 builds both x86_64 and arm64 slices for the iOS Simulator now?

Only under certain circumstances, though.

If you build with xcodebuild, and specify the generic destination, like so:

xcodebuild -project Cat.xcodeproj -scheme Cat -destination "generic/platform=iOS Simulator"

and then you do this from the command line:

lipo -archs Cat.app/Cat

you’ll see this:

x86_64 arm64

If you build the same thing within the Xcode application, specifying a particular simulator model, on any Mac now shipping, you’ll instead see this:

x86_64

Looks like they’re thinking ahead to ARM-based Macs, eh?

This can cause problems.

Let’s say you have a pre-built framework, ready for the simulator and any iOS device.

When you try to link that framework under the above command, you’ll get an odd-sounding error, something like this:

building for iOS Simulator, but linking in object file built for iOS, for architecture arm64

That’s weird, right? It’s looking for the arm64 slice, and it found it! But because it’s categorized as for device, instead of for the simulator, the linker errors out.

You might say to yourself, I can fix this! I’ll rebuild my framework using Xcode 12!

You can, but it may involve more effort than you’re willing to put in right now.

The old way to make a framework for shipping is with lipo. But when you try to use lipo -create to combine (a) a device binary with ARM slices and (b) a simulator binary with ARM and Intel slices, you get an error:

lipo: simulator/Meow.framework/Meow and devices/Meow.framework/Meow have the same architectures (arm64) and can't be in the same fat output file

So that’s out.

The new way to make a framework for shipping is to make it an XCFramework.

As far as I can tell, even in Xcode 12, support for this is not built in to the application itself. You have to use xcodebuild, as described in this WWDC session. And your end product is no longer a .framework bundle, but rather an .xcframework bundle, requiring that every target that links against it be modified.

This is fine if you control all the code yourself, but what if you’re getting a framework from a third-party vendor? Are they ready to switch to an XCFramework right now?

In any case, unless you’ve gotten your hands on one of those shiny new developer kits from Apple, there’s absolutely no need for you to be building simulator builds for ARM just yet.

Instead, don’t build for ARM at all.

Go to your target’s build settings, go to Architectures, and find the new setting Excluded Architectures (EXCLUDED_ARCHS), which Apple recommends you use instead of the older setting Valid Architectures (VALID_ARCHS).

There, hover over it with your mouse and click the + button that appears, and it will give you the option of adding a subheading called “Any iOS Simulator SDK”. Do that, and add an arm64 entry to the build setting’s list of values.

Screenshot of the Xcode build settings user interface, with an "Any iOS Simulator SDK" entry added under Excluded Architectures and set to arm64.

You don’t want to exclude arm64 for Debug builds across the board, because you could be building a Debug build for a device. Exclude it just for the simulator.

You can also, instead of specifying it in the project, specify it in the xcodebuild invocation, like so:

xcodebuild -project Cat.xcodeproj -scheme Cat -destination "generic/platform=iOS Simulator" EXCLUDED_ARCHS=arm64

I hope this helps anyone who’s been puzzling over this issue!

Seeing It All in Roll20

I’ve been playing a lot of Dungeons & Dragons lately. You might have suspected as much if you saw my current Twitter account icon.

Since the Pandemic started, my campaigns have all taken place over the Internet. The way most people play D&D over the Internet is through a site called Roll20, which gives you easy access to your character information, maps, and a bunch of other things.

Roll20 is a very powerful website, free to use, and…a little fiddly. If you’re a GM (Game Master) for a game on Roll20, and you’ve already gone through the in-editor tutorials and tried things out for yourself, there are a couple of steps I’ve found that you can follow to make the experience better for your players.

1. Visible Character Sheets
I’ve found it helpful for each player to be able to see, not just their own character sheet, but the character sheets of all the other players in the game.

When, as a GM, you first create a character sheet for a player, you need to set both who can see that sheet, and who that sheet is controlled and editable by.

These are modified by clicking the character name to open the sheet, then clicking the Edit button, and finally going to the In Player’s Journals and Can Be Edited & Controlled By sections, respectively.

Most GMs start out by setting both fields only to the individual player who owns the character.

But if, instead, you set In Player’s Journals to the special All Players option, that character will be visible to all existing players, including the controlling player, and any new players you add, without any further work from you. That’s what I would recommend.

Screenshot of the edit view for a character sheet. The "In Player's Journals" section has been set to a single token called "All Players", and the "Can Be Edited & Controlled By" section below it has been set to a single token called "Player 1".

2. Visible Token Labels
Now that you’ve made the character sheets, you or the controlling player can drag those character tokens on to the current map page. (Be sure to start the drag in the character’s name, not the icon.)

By default, this doesn’t show the name of the character, either to you or to the players.

You can change this, first, by clicking the token on the map to select it, then clicking the gear icon.

Under the Basic tab, in the Name section, there is a checkbox labeled Show nameplate? If you check that, the character’s name will be visible to both you and the controlling player.

Screenshot of the edit view for a map token, with the "Basic" tab selected. The "Name" section has a checkbox called "Show nameplate?" that has been checked.

If you want the label to be visible to everyone, which I would recommend, go to the Advanced tab and, in the Name section, check the See checkbox.

Screenshot of the edit view for a map token, with the "Advanced" tab selected. The "Name" section has a checkbox called "See" that has been checked.

Note that the players can’t set these values for themselves. You need to do it as the GM, for every dragged-out token, individually.

Unfortunately, these changes aren’t “sticky”. (Editor’s note: you can make these changes permanent; see my newer post for details.) If someone drags out a second token for a character, say, on a new map page, these changes have to be made all over again. That’s annoying!

Instead, select the tokens that you’ve already edited and that you want to appear on another page, and copy them. Go to the second page, and then paste the tokens there. This way, you’ll have the tokens available on the second page, with all your changes.

I hope this is helpful!

Installing CocoaPods: What Works for Me

I’m making this post mostly to have a reminder for myself.

Recently, I wound up on a Mac that didn’t have CocoaPods installed.

The instructions on the Install tab of https://cocoapods.org/ say to type this on the command line:

sudo gem install cocoapods

That does work.

But I run into problems if I then move directly on to the instructions on the Get Started tab and make myself a Podfile and type this on the command line:

pod install

In my experience, if I do this, I get an error saying it can’t find whatever Pod I specify, even if I know that Pod exists and is available to me.

Quite frustrating.

The solution, which I found, like every self-respecting programmer does, on Stack Overflow, is to type this:

pod setup

For me, this command takes forever and eventually errors out, but it succeeds enough to allow my pod install command to start working.

So if you didn’t already know the magic command: now you do.