Thoughts on Learning a Little Scala

In some ways, Scala is to Java what Swift is to Objective-C: a functional improvement over an object-oriented predecessor.

  • Based on functional concepts, when the previous language was primarily OO.
  • Updated, modern, compact syntax.
  • Strongly typed, but with type inference, so complex type declarations can be omitted.
  • Compiler much slower, since it is doing more.
  • Strong interop with legacy code.

In other ways, though, Java to Scala is more like the transition from C to Objective-C:

  • Standard types taken from previous language.
  • Runtime taken from previous language.
  • Compromises made in language design to accommodate the previous language.

Both Objective-C and Scala were languages invented outside the nurturing environment of a flagship corporation. They couldn’t reinvent the wheel. They didn’t have the resources. (Same could be said for C++.) So they had to find a “host”.

Java and Swift, on the other hand, had serious money behind them, and could do everything from scratch if they wanted. They could think big.

I believe you need both to make progress.

Apple is taking dramatic steps now. But they will eventually finish all the large-scale, cutting-edge elements they’re willing to sponsor for their business, and the pace of change will slow.

And then, once again, we’ll have to look outside of Apple for language innovation.

New Codebase, Who Dis?

I’ve found that just reading through a new codebase isn’t enough to get me comfortable with it.

I’ve even found that having it explained by the previous developers doesn’t do the trick.

What does “comfortable” mean? It means that I have an accurate mental model of it. I’ve internalized it. I don’t have to check the code or the documentation to know the following:

  • what the big features are
  • what it does well
  • what needs to be fixed about it
  • what looks bad but doesn’t need to be fixed right away
  • what the low-hanging fruit for code cleanup is
  • what OS features it doesn’t support, but should
  • what OS features it doesn’t support, and never will
  • what its predominant style is (or if it has one)
  • where the best place to put a helper method is

That list is just off the top of my head. I’m sure there’s more.

So, if just reading the code doesn’t work for me, what does?

Actually working on it.

Fixing bugs. Adding new features to existing code. Going through one full major release cycle, if possible.

Then I can start thinking concretely about making major improvements to it.

Twitter Image Descriptions

In my previous post, I talked about how Twitter could add an OCR capability to their system if they wanted to.

They haven’t, but they do have something related: the ability to add your own “image description” to the images in your tweets:

https://support.twitter.com/articles/20174660

You can add a full, separate image description for each image in a tweet, not just the first one.

Per that document, you have 420 characters to use for your image description.

You can’t add it for animated GIFs or videos, and you won’t be able to access the image description through the standard UI — only through things like screen readers.

And it doesn’t look like this functionality is available to third-party Twitter clients.

It’s not as good as the kind of OCR system I imagined. If there is text in your image, you’ll have to type it in yourself.

And if you use an image that has over about four hundred characters of text in it, you won’t be able to include all of it. That is a decent amount of text, however.

Twitter OCR Bot: a Failed Proposal

It’s a common practice on Twitter to tweet pictures full of text. Screenshots of email, screenshots of Tumblr threads, screenshots of newspapers, screenshots of a television screen with a scrolling news ticker. Oftentimes the tweet itself has little to no text describing the image contents.

It’s bothered me for a while.

Why?

Because it means that people who rely on screen readers to understand the Internet are completely shut out: for example, people who are blind or otherwise visually impaired.

If I’m remembering correctly, one of the rationales from Twitter when they floated the idea of allowing longer tweets was precisely so that people could add the kind of text to a tweet that you now need to use an image for.

I had an idea about a way to fix or at least help with that, without requiring Twitter itself to make changes.

Too bad it’s a terrible idea.

The idea? Make a bot, e.g. @OCRBot, that, when cc’ed on a tweet with an image, would run that image through OCR, and then tweet back to the tweet’s author with the text, split across multiple tweets if necessary.

At first, this seems like a wonderful idea. It doesn’t require the original tweet author to do anything, it doesn’t require Twitter to change its service in any way. The person who wants the text of the image just cc’s it to @OCRBot, and gets a tweet or a couple of tweets in response.

The trouble is, it can easily be weaponized, so that the auto-generated tweets are also directed to a harassment target. After writing up one way to do that, I realized I didn’t want to give lazy harassers any easy ideas. So I’m not going to go into details about it here. (Nor am I interested in people hashing it out in the comments.)

Suffice to say, I don’t think it would work as a third-party service for that reason, and wouldn’t advise anyone to try it.

It could, instead, be built into Twitter clients, either from Twitter itself or from third-party companies.

Third-party Twitter client companies, however, probably already aren’t making much profit on those clients, and it would be unreasonable for them to shoulder the burden of such a service, which could be easily overloaded.

Twitter itself could handle the cost of such a service.

But I haven’t seen any indication of them heading in this direction.

Has anyone tried such a service that I’m just not aware of, or proposed anything similar?

Swift on the Server, Part 1

I’m not convinced Swift is going to be a long-term hit in server software.

The big push I’ve heard about is from IBM. In this recent talk, Chris Bailey gives some reasons to use Swift on the server:

  • It’s faster and uses less memory than some other technologies.
  • It has the potential to reduce communication errors when used for both the client and the server. Chris mentions the Mars Climate Orbiter as an example of such an error.

I personally don’t find these arguments compelling.

First of all, plenty of extremely popular technologies are not the most performant technologies. You choose them because they’re easier to develop in, easier to maintain, easier to keep up and running. If we wanted the very fastest, we’d still be writing server software in C.

Second, most current server software is written in a different language, and with different libraries, than the client software it talks to. People know how to solve this problem. Hint: switching to a new language isn’t necessary.

Third, native iPhone and Mac apps are an important but not overwhelming subset of the clients a server has to talk to. The Swift advantage vanishes if we’re talking about Android or Windows or web clients.

So is Swift going to be easier to develop in, easier to maintain, and easier to keep up and running than its competitors on the server?

Making it those things for server software is certainly not Apple’s priority. Their goal is to make it work for them, which means low-level OS software, frameworks, and native application development.

IBM can try to do this work. Chris’s talk is all about the extra steps they’ve taken, the extra projects they’ve written, to do just that.

But at some point, as part of their effort, IBM is going to want something from Apple, something from the Swift development effort, which clashes with what Apple thinks is important.

Who’s going to win that clash?

Hip to Be Squarespace

If you listen to any podcasts by members of the Apple community, you’ll eventually listen to a Squarespace ad.

When I was restarting this blog, I spent about a month on and off experimenting with using Squarespace. Give myself a clean break, you know?

Now, because starting a new blog would require moving over my old Powers of Observation content and my old Helpful Tiger content, I needed a system that would provide robust importing capabilities.

Squarespace is not that system.

Here are some of the problems I found when trying to use Squarespace to do those imports:

  • Multiple content problems with WordPress file imports, including not recognizing the returns after the first paragraph, not recognizing tags if there was a / in their enclosed contents, not converting links properly, and more.
  • Several times, when I tried a new import file, the import would just stop dead, with a status of “Waiting”, for two days or more at a time, when otherwise it took less than ten minutes. Their support line was unable to give me a reason or to fix it for me. Eventually, after multiple days of delay, the stalled import would finish without problems.
  • Their blog post editing tools would discard formatting from the imported posts, requiring me to add it in again if I did any manual touchups.
  • No ability to add tags to multiple posts at once.
  • Looking at my own posts in Safari would peg my Mac’s CPU at 100% or more.
  • Inability to link to the comments section of a post.

Finally, I just said, “Enough!” and decided to re-invest in WordPress.

And you know what?

The imports went just fine. Editing is much smoother. And there are far more and better tools.

Plus, it’s cheaper.

My experience might not have been typical, I’m happy to admit. If you’re not doing any importing, it might be fine. But from my perspective, I don’t know why anyone with any technical bent at all would choose Squarespace over WordPress.

Maybe that’s why they need so many ads?

History Repeating Itself

At my last job, I wanted to take some private company CocoaPods and merge them into the main company codebase. That way, I could make changes to interrelated classes with a single commit.

But the pods and the main codebase were all in different GitHub repositories.

The naive way to do this would just be to take all the pod files and copy them over to the main repository, and check them in as a new commit. But that would lose all the history of those files, which I didn’t want.

Instead, I decided to copy the git history of the pod repositories over. Yup, you can combine completely unrelated git repositories and retain all their histories, together, without mucking about in git internals. Thanks to Jens Ayton for telling me about the necessary steps.

I’ve created an extremely simple set of three GitHub repositories to show how it works.

WhiteProject and BlueProject are the stand-ins for the CocoaPods projects. They have but a single file in them, White.swift and Blue.swift, respectively.

RainbowProject is the stand-in for the main codebase. It’s a regular sample Xcode project, in this case a macOS command-line app.

You can see that the color projects each have a commit history, for the creation of their Swift files and for the addition of some comments.

First thing I did was clone all three repositories locally, in the same parent directory.

Then, I created a branch in RainbowProject called add-white-project, so I could make a pull request of it later.

After that, I added a remote reference to the WhiteProject repository to RainbowProject, like this:

git remote add WhiteProject ../WhiteProject/

I made the connection via the two local copies of the repositories. I don’t know if there’s a way to accomplish this without using local copies.

Here’s what it looks like to have that remote reference, in SourceTree:

Table with header Remotes and two rows, first row WhiteProject and second row origin

Next, I fetched from the new remote, and then merged it into the local repository, with these commands:

git fetch WhiteProject
git merge --allow-unrelated-histories -m 'Merge history from WhiteProject' WhiteProject/master

Note the following:

  • The --allow-unrelated-histories argument is needed for git 2.9 and higher, according to this Stack Overflow answer and my own experience. I’ve got git 2.10 installed on my machine. Is that from an Xcode install or my own separate install? What version of git comes with Xcode? I can’t answer these questions, so your mileage may vary.
  • You need to specify both the remote repository and the branch in the remote repository, or it won’t work.

Here’s what it looks like in SourceTree after that merge:

Tree with root add-white-project and two branches, first branch from the RainbowProject repository with one commit, and second branch from the WhiteProject repository with two commits

Note the separate WhiteProject repository history is all there (all two commits, in our extremely simple example), and it’s hanging off of that merge commit we just made, all without obliterating the previous RainbowProject history, either. That’s what we want.

From here, I made a pull request, as you would do for a Real Project at Work. Here’s what that looks like on the GitHub website:

Screenshot of GitHub pull request user interface including PR text and list of commits.

I merged that, and then removed the remote reference, which was no longer needed:

git remote remove WhiteProject

At that point, I was done with the WhiteProject merge, and ready to perform the same steps for the BlueProject.
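For reference, the whole procedure condenses to a short shell session. This is a sketch against throwaway stand-in repositories built on the spot, not the real projects; the names echo my toy example:

```shell
# A condensed, reproducible version of the steps above, using throwaway
# stand-in repositories. Substitute your own clones, side by side in one
# parent directory, for the real thing.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the main codebase, with one commit of its own.
git init -q RainbowProject
cd RainbowProject
git symbolic-ref HEAD refs/heads/master   # pin the branch name to master
git config user.email rainbow@example.com
git config user.name Rainbow
echo '// main' > main.swift
git add . && git commit -qm 'Initial Rainbow commit'
cd ..

# Stand-in for the pod, with two commits of history worth keeping.
git init -q WhiteProject
cd WhiteProject
git symbolic-ref HEAD refs/heads/master
git config user.email white@example.com
git config user.name White
echo '// white' > White.swift
git add . && git commit -qm 'Add White.swift'
echo '// a comment' >> White.swift
git commit -qam 'Add comments'
cd ..

# The actual procedure: branch, add the remote, fetch, merge, clean up.
cd RainbowProject
git checkout -qb add-white-project
git remote add WhiteProject ../WhiteProject/
git fetch -q WhiteProject
git merge --allow-unrelated-histories \
    -m 'Merge history from WhiteProject' WhiteProject/master
git remote remove WhiteProject

# All three original commits plus the merge commit are now in one history.
git log --oneline
```

The same sequence, with the names swapped, handles BlueProject.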

Now, the steps I followed for the merge at work were much more complicated than this simple example. In particular, I had to take what used to be separate static libraries whose files were managed by CocoaPods, and add them to my main Xcode project directly.

From that more complex scenario, I have a bunch of tips:

  • Make sure the files you’re merging in are all in different locations than the existing files, otherwise there’ll be conflicts.
  • Image and other resource files that couldn’t be in asset catalogs as long as they were in a pod can now be put into the asset catalog of the main app (and should be).
  • Your pod source code might use [NSBundle bundleForClass:[MyPodClass class]] to get the bundle to load a resource from. You should change that to the main bundle, [NSBundle mainBundle], except where the API treats nil as the main bundle, as [UIStoryboard storyboardWithName:bundle:] does; there you can just pass nil.
  • Check whether you’re loading the pod bundle explicitly for anything and change your code to use other mechanisms.

That’s it! Let me know if you have any questions.

Attack the Block

How many readers of this blog know that Objective-C blocks are initially created on the stack, unlike every other type of Objective-C object? I believe this is for performance reasons.

It used to be a bigger deal, before ARC. Why? Because those stack-based blocks would be deallocated once their scope ended. If you tried to reference a stack-based block outside its enclosing scope, your app would crash and burn.

To get around this, you had to send a copy message to the block, which would perform a special sort of copy that moved it to the heap, where it could live on like every other Objective-C object. Then it could be passed around, because it wasn’t tied to the stack’s scope anymore. Of course, then you’d also be on the hook for sending it a release message, or you’d have a memory leak.

That’s why, if you have a block property, you’re supposed to use the copy attribute, not the retain (now strong) attribute:

typedef void (^MyBlock)(void);

@interface MyClass : NSObject

@property (nonatomic, copy) MyBlock myBlock;

@end

All that’s water under the bridge with ARC, however.

ARC adds those copy calls for you, in the same way that it adds retain and release calls for regular Objective-C objects. You’ll never have to worry about using a stack-based block outside of its scope accidentally, because ARC will never let you do that.

The result? Now, when I mention the dangers of stack-based blocks to my younger coworkers, they have no idea what I’m talking about.

Interview Ballyhoo

I did quite a bit of interviewing recently before I got my new job.

I’ve come to believe your success depends much more on the attitude of the interviewer than on how much you prepare. Anyone can find a gotcha question you can’t answer. Anyone can twist your lack of instant recall of a topic into an irrecoverable failure. You simply can’t know everything off the top of your head.

And on the flip side, anyone could talk you through your nervousness or your sudden blanking on things-you-knew-an-hour-ago, if they really wanted to. Anyone could connect with you and get you to open up about what you understand.

Could, but often won’t.

So while you should definitely do the preparations that they advise you to do — many companies give you fairly detailed lists of things to study — you shouldn’t kick yourself when you get rejection emails.

And you will get them, and they’ll almost never give you very helpful feedback. That just seems to be the way it is, however frustrating.

Like a Beacon in the Dark

This might be old news to my readers, but…I recently had to test beacon support for an iOS application.

I learned that you can do so without actually buying separate beacon hardware, by taking an iPhone and making it broadcast like a beacon.

I did this by installing a freeware application called GemTot SDK. It’s from a company called PassKit, which sells GemTot Beacons.

You can also follow their blog post’s instructions for building their Xcode project yourself and running it on your phone.

But I figured, absent taking the time to inspect the code thoroughly myself, it was safer to use the version that had already been through App Store review.

Here are the steps:

  1. Search for “GemTot SDK” on the iOS App Store, download it, install, run. (There are separate iPad and iPhone versions.)
  2. In the iPhone version, tap the “Beacon” tab all the way to the right.
  3. Set the “Broadcast Signal” switch to On.

That’s it! You have a functioning beacon.

In the tests I did, I believe I needed to set either the Major Value or the Minor Value to something other than zero. So if things aren’t working, you could try that, though that doesn’t appear to be necessary in general.

If you need the UUID of the beacon, you can tap on the tiny beacon text near the bottom of the screen, and an alert will pop up to tell you it’s been copied to the clipboard.

If you want a quick and dirty way to tell that the beacon is broadcasting, take a look at https://github.com/mlwelles/BeaconScanner, which has a pre-built binary in addition to buildable source code (and a nicely informative README with a bunch of links).

(Build and) run that Mac app, and check the window to see if your beacon’s there.

If you want to test your iOS app, though, you’ll need a second phone (or other iOS device).