Translating Objective-C to Swift in Xcode 9.0 Beta 2

I’m putting together a post comparing Mac drag and drop APIs and iOS drag and drop APIs.

To prepare, I took the Xcode CocoaDragAndDrop sample project (here, last modified in 2011 with the note “Updated for Xcode 4”) and converted it to Swift (here) using the second beta of Xcode 9.0.¹

Since I haven’t internalized the patterns behind Objective-C-to-Swift method conversions, I was often frustrated trying to translate Objective-C method calls into their Swift equivalents.

While I was working on the project, it seemed that four times out of five, when I tried to jump to a class or protocol’s declaration in Apple’s headers to see its Swift-ified methods, Xcode would take me to the Objective-C header instead, even though I was starting from a Swift file.

Of course, now that I’m trying to reproduce it to file a Radar, it doesn’t happen. I wonder if that’s because the final project has no Objective-C files in it at all.

It doesn’t help that the translations changed between Swift 3 and Swift 4.

For example, NSPasteboardTypeTIFF in Swift 3 is now NSPasteboard.PasteboardType.tiff in Swift 4, with a similar pattern for all its friends.

register(forDraggedTypes newTypes: [String]) is now registerForDraggedTypes(_ newTypes: [NSPasteboard.PasteboardType]).

Etc.
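Side by side, the renames look something like this — a minimal before-and-after sketch, assuming an NSView subclass setting up drag and drop:

```swift
import Cocoa

class DragView: NSView {
    override func awakeFromNib() {
        super.awakeFromNib()

        // Swift 3:
        // register(forDraggedTypes: [NSPasteboardTypeTIFF])

        // Swift 4: both the method name and its argument type changed.
        registerForDraggedTypes([NSPasteboard.PasteboardType.tiff])

        // Or, letting the context shorten the case name:
        registerForDraggedTypes([.tiff])
    }
}
```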

It’ll be nice to be working exclusively in Swift for the rest of this effort.


1. Feedback welcome!

Refactoring a Massive View Controller

Now that Apple has announced a long-delayed revamp of refactoring in Xcode 9, it’s a good time to talk about my proudest refactoring moment over the last year:

I successfully broke apart a massive view controller in shipping code.

How? When you refactor a massive view controller, you know what you want your code to look like when you’re done: the places you’ll put all the diverse logic that’s currently twisted and squashed into one place.

Where you stumble is how to get it there while keeping it all working.

When I looked at my massive view controller, I saw about four different areas of responsibility, four different related sets of methods and instance variables. But they weren’t cleanly divided: if I tried to take out any one of those areas, I’d be dragging in bits of the other areas along with it.

So I did.

I made a new class, let’s say for Login functionality, and pulled out all the methods and related ivars.

But since the existing spaghetti code left in the view controller still wanted access to some of those methods and ivars, I couldn’t make a well-designed Login class at this stage. Instead, I left plenty of methods public that should have been private, so they could be called by the old view controller. I left plenty of read/write properties public, so they could be accessed by the old view controller. I think I wound up exposing 12 properties in all.

It was a total mess.

But it still worked exactly like it used to, no regressions, because it was exactly the same code — just moved.
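To make that concrete, here’s a sketch of what such an intermediate class might have looked like. The Login example and every name in it are hypothetical:

```swift
import Foundation

// Hypothetical first-pass extraction: everything Login-related cut and
// pasted out of the view controller, with nothing redesigned yet.
final class LoginController {
    // These should be private, but the old view controller still
    // reaches into them, so they stay exposed for now.
    var username: String?
    var sessionToken: String?
    var isLoggingIn = false

    // Moved verbatim; still callable from the middle of the view
    // controller's remaining spaghetti.
    func beginLogin() {
        isLoggingIn = true
        // ...exactly the code that used to live in the view controller
    }

    func handleLoginResponse(_ data: Data) {
        // ...also moved, unchanged
    }
}
```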

Then, instead of trying to fix up the Login class, I moved on to the next. Maybe the next one was Network Connectivity. Maybe the one after that was Model Loading. Whatever they were, I pulled everything related to them out into their own classes, doing nothing except cutting and pasting the code from one place to another.

And as I went about it, a funny thing happened.

Even though I wasn’t trying to finalize the design or the APIs yet, with each new area I pulled out, I found I could refine the previously extracted areas. Exposed properties that had seemed to be accessed at random, I could now see were used by only one of the specific areas I’d pulled out, and only at specific times. I could start to move properties between the extracted classes, cutting and pasting them where they belonged. I could move closer to the encapsulation I wanted.

All without breaking anything, because I was taking such tiny, straightforward, safe steps.

That meant, by the time everything was extracted, I was actually much closer to a final design than I had any right to be, given initial conditions.

At that point, I could finish the redesign through more conventional means.
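Continuing the hypothetical Login sketch from above, the end state looked much more like the class I’d wanted from the start:

```swift
// The same hypothetical class after the later passes: state that only
// Login needs is private again, and callers get a small, deliberate surface.
final class LoginController {
    private var username: String?
    private var sessionToken: String?
    private(set) var isLoggingIn = false

    func beginLogin(username: String) {
        self.username = username
        isLoggingIn = true
        // ...the same moved code, now properly encapsulated
    }
}
```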

Boilerplate in C++ and Swift

Moving from C++ to Objective-C was a revelation to me.

In C++, dynamic lookup was a chore. Because the language was relatively static, if you wanted to go from an arbitrary key to code, you had to write your own custom lookup table.

I remember writing a lot of registration code, a lot of boilerplate. For each class or method I wanted to look up, I would put an entry in the lookup table. Maybe it was part of an explicit factory method, maybe it was a C macro, maybe it was some sort of template metaprogramming magic. But there had to be something, and you had to write it every time.
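The shape of that pattern, transplanted into Swift for illustration (all of these names are made up):

```swift
// A hand-rolled lookup table: map a string key to a factory closure,
// and register every type yourself, one entry at a time.
protocol Widget { func draw() }

struct Button: Widget { func draw() { print("drawing a button") } }
struct Slider: Widget { func draw() { print("drawing a slider") } }

var widgetFactory: [String: () -> Widget] = [:]

// The boilerplate: one registration line per type, every time.
widgetFactory["Button"] = { Button() }
widgetFactory["Slider"] = { Slider() }

// Going from an arbitrary key to code.
widgetFactory["Button"]?().draw()
```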

Boilerplate, boilerplate, boilerplate. Over and over again.

In Objective-C, the dynamic lookup mechanism was built into the language: dynamic dispatch. Look up any class, any method, with just a string.
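From Swift, with the Objective-C runtime available, that looks something like this:

```swift
import Foundation

// No hand-written table: the Objective-C runtime resolves both the
// class and the method from plain strings at runtime.
if let cls = NSClassFromString("NSDateFormatter") as? NSObject.Type {
    let instance = cls.init()
    let selector = NSSelectorFromString("description")
    if instance.responds(to: selector) {
        _ = instance.perform(selector)
    }
}
```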

I remember reading somewhere — I wish I remembered where — a post where someone pointed this out: the C++ technique and the Objective-C technique both required lookup tables, but in the latter case, the table was maintained for you by the Objective-C runtime. Objective-C didn’t reduce the inherent complexity; it just hid it, made it uniform.

The LLVM team, on the other hand, has been trying to kill dynamic dispatch for a long time.

Since ARC, calling arbitrary methods by string, the core of dynamic dispatch, by default triggers a warning.

And of course, in pure Swift, that string-based dynamic lookup is completely absent. Everything must be known to the compiler ahead of time.

I understand why. They want to make it safer.

Does that mean we’re seeing the reinvention of the custom lookup table in Swift?

Swift enumerations, for example, make this relatively easy, since you can pair methods, i.e. arbitrary code, with each enumeration case.

If I have to write a new enum case for every new class, though, then I consider that unnecessary boilerplate, a throwback to C++ techniques. Boilerplate.
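Here’s the kind of thing I mean, as a hypothetical sketch:

```swift
// Hypothetical handlers; the point is that every new one added below
// also requires touching the enum and its switch.
struct LoginHandler { func run() { print("handling login") } }
struct SettingsHandler { func run() { print("handling settings") } }

// The lookup table, reinvented as an enum: string key in, code out.
enum Route: String {
    case login
    case settings

    func handle() {
        switch self {
        case .login: LoginHandler().run()
        case .settings: SettingsHandler().run()
        }
    }
}

// Going from an arbitrary string to code, once again by hand.
Route(rawValue: "login")?.handle()
```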

And I wish we didn’t have to go back down that road.

WWDC Sessions by Women and Minorities

My one WWDC 2017 prediction:

People will be talking, afterward, about the number of women and minorities up on stage during the Keynote.

This is highlighted by the (unconfirmed?) report that the breakout woman of color from last year’s WWDC Keynote, Bozoma Saint John, is leaving Apple soon.

From what I have read and experienced, Apple, like many tech companies, is struggling to increase its diversity in any substantial way. Rows and rows of white dudes (such as myself, when I was there) working on Apple’s hardware and software.

The executive team you see at the Keynote is important, but I don’t see it changing anytime soon. So I’m looking at the rest of the sessions.

The people who give the regular talks at WWDC are the people who work on the thing, or their immediate managers, who usually contribute technically as well.

So when I’ve seen more women up on stage for those talks in the last couple years, I’ve been pretty happy with it. They aren’t tokens. Apple really is employing more women to work on their stuff.

So that’s what I’ll be looking for as I watch the sessions (remotely) this year: more women and minorities throughout.

Thoughts on Learning a Little Scala

In some ways, Scala is the functional improvement over Java that Swift is over Objective-C.

  • Based on functional concepts, when the previous language was primarily OO.
  • Updated, modern, compact syntax.
  • Strongly typed, but with type inference, so complex type declarations can be omitted.
  • Compiler much slower, since it is doing more.
  • Strong interop with legacy code.

In other ways, though, Java to Scala is more like the transition from C to Objective-C:

  • Standard types taken from previous language.
  • Runtime taken from previous language.
  • Compromises made in language design to accommodate the previous language.

Both Objective-C and Scala were languages invented outside the nurturing environment of a flagship corporation. They couldn’t reinvent the wheel. They didn’t have the resources. (Same could be said for C++.) So they had to find a “host”.

Java and Swift, on the other hand, had serious money behind them, and could do everything from scratch if they wanted. They could think big.

I believe you need both to make progress.

Apple is taking dramatic steps now. But they will eventually finish all the large-scale, cutting-edge elements they’re willing to sponsor for their business, and the pace of change will slow.

And then, once again, we’ll have to look outside of Apple for language innovation.

New Codebase, Who Dis?

I’ve found that just reading through a new codebase isn’t enough to get me comfortable with it.

I’ve even found that having it explained by the previous developers doesn’t do the trick.

What does “comfortable” mean? It means that I have an accurate mental model of it. I’ve internalized it. I don’t have to check the code or the documentation to know the following:

  • what the big features are
  • what it does well
  • what needs to be fixed about it
  • what looks bad but doesn’t need to be fixed right away
  • what the low-hanging fruit for code cleanup is
  • what OS features it doesn’t support, but should
  • what OS features it doesn’t support, and never will
  • what its predominant style is (or if it has one)
  • where the best place to put a helper method is

That list is just off the top of my head. I’m sure there’s more.

So, if just reading the code doesn’t work for me, what does?

Actually working on it.

Fixing bugs. Adding new features to existing code. Going through one full major release cycle, if possible.

Then I can start thinking concretely about making major improvements to it.

Twitter Image Descriptions

In my previous post, I talked about how Twitter could add an OCR capability to their system if they wanted to.

They haven’t, but they do have something related: the ability to add your own “image description” to the images in your tweets:

https://support.twitter.com/articles/20174660

You can add a full, separate image description for each image in a tweet, not just the first one.

Per that document, you have 420 characters to use for your image description.

You can’t add one for animated GIFs or videos, and you can’t access the image description through the standard UI — only through things like screen readers.

And it doesn’t look like this functionality is available to third-party Twitter clients.

It’s not as good as the kind of OCR system I imagined. If there is text in your image, you’ll have to type it in yourself.

And if you use an image that has over about four hundred characters of text in it, you won’t be able to include all of it. That is a decent amount of text, however.