Refactoring a Massive View Controller

Now that Apple has announced a long-delayed revamp of refactoring in Xcode 9, it’s a good time to talk about my proudest refactoring moment over the last year:

I successfully broke apart a massive view controller in shipping code.

How? When you refactor a massive view controller, you know what you want your code to look like when you’re done: the places you’ll put all the diverse logic that’s currently twisted and squashed into one class.

What you stumble over is how to get it there while keeping it all working.

When I looked at my massive view controller, I saw about four different areas of responsibility, four different related sets of methods and instance variables. But they weren’t cleanly divided: if I tried to take out any one of those areas, I’d be dragging in bits of the other areas along with it.

So I did.

I made a new class, let’s say for Login functionality, and pulled out all the methods and related ivars.

But since the existing spaghetti code that was left in the view controller wanted access to some of those methods and ivars, I couldn’t make a well-designed Login class at this stage. Instead, I left plenty of methods public that should have been private, so they could be called by the old view controller. I left plenty of read/write properties public, so they could be accessed by the old view controller. I think I wound up exposing 12 properties in all.
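
Concretely, that first pass looked something like the sketch below. The names are hypothetical and the bodies are elided, but the shape is the point: everything the old view controller still reaches into stays public for now.

    import UIKit

    final class LoginController {

        // Read/write state the view controller still pokes at directly.
        var isLoggingIn = false
        var lastLoginError: Error?
        var credentials: (username: String, password: String)?

        // Methods that ought to be private, but can't be yet, because the
        // remaining view controller code still calls them mid-flow.
        func beginLogin() { /* moved verbatim from the view controller */ }
        func resetLoginState() { /* moved verbatim */ }
        func handleLoginResponse(_ data: Data) { /* moved verbatim */ }
    }

    final class MassiveViewController: UIViewController {
        let login = LoginController()

        func retryButtonTapped() {
            // The old spaghetti still reaches across the new boundary.
            login.resetLoginState()
            login.beginLogin()
        }
    }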

It was a total mess.

But it still worked exactly like it used to, no regressions, because it was exactly the same code — just moved.

Then, instead of trying to fix up the Login class, I moved on to the next. Maybe the next one was Network Connectivity. Maybe the one after that was Model Loading. Whatever they were, I pulled everything related to them out into their own classes, doing nothing except cutting and pasting the code from one place to another.

And as I went about it, a funny thing happened.

Even though I wasn’t trying to finalize the design or the APIs yet, with each new area I pulled out, I found I could refine the previously extracted areas. Exposed properties that had seemed to be accessed at random, I could now see were used by only one of the areas I’d pulled out, and only at specific times. I could start to move properties around between the extracted classes, cutting and pasting them where they should go. I could move closer to the encapsulation I wanted.
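
By that point the Login class could start to look like something I would actually design. A hypothetical sketch of that later state, with the same made-up names as before:

    import Foundation

    final class LoginController {

        // Still read by the view controller, but no longer written from outside.
        private(set) var isLoggingIn = false

        // Turned out to be touched only inside this class, so they can be hidden.
        private var lastLoginError: Error?
        private var credentials: (username: String, password: String)?

        // Still public, but now called from exactly one well-defined place.
        func beginLogin() { /* unchanged */ }
        func resetLoginState() { /* unchanged */ }

        // No callers remain outside this class, so it can finally be private.
        private func handleLoginResponse(_ data: Data) { /* unchanged */ }
    }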

All without breaking anything, because I was taking such tiny, straightforward, safe steps.

That meant, by the time everything was extracted, I was actually much closer to a final design than I had any right to be, given the initial conditions.

At that point, I could finish the redesign through more conventional means.

Boilerplate in C++ and Swift

Moving from C++ to Objective-C was a revelation to me.

In C++, dynamic lookup was a chore. Because the language was relatively static, if you wanted to go from an arbitrary key to code, you had to write your own custom lookup table.

I remember writing a lot of registration code, a lot of boilerplate. For each class or method I wanted to look up, I would put an entry in the lookup table. Maybe it was part of an explicit factory method, maybe it was a C macro, maybe it was some sort of template metaprogramming magic. But there had to be something, and you had to write it every time.

Boilerplate, boilerplate, boilerplate. Over and over again.

In Objective-C, the dynamic lookup mechanism was built into the language: dynamic dispatch. Look up any class, any method, with just a string.
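
Here is a minimal sketch of that kind of lookup. It is written in Swift against the Objective-C runtime, so it compiles today; the class and method names are made up, but the machinery is the same string-based lookup:

    import Foundation

    // The explicit @objc name keeps this class visible to the runtime as the
    // plain string "Greeter"; Swift would otherwise prepend the module name.
    @objc(Greeter)
    class Greeter: NSObject {
        @objc func greet() {
            print("hello, found by string")
        }
    }

    // Class lookup by string.
    if let foundClass = NSClassFromString("Greeter") {
        print(foundClass)   // Greeter
    }

    // Method lookup by string, via a selector.
    let greeter = Greeter()
    let selector = NSSelectorFromString("greet")
    if greeter.responds(to: selector) {
        _ = greeter.perform(selector)   // prints "hello, found by string"
    }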

I remember reading somewhere (I wish I remembered where) a post in which someone pointed this out: the C++ technique and the Objective-C technique both required lookup tables, but in the latter case the table was maintained for you by the Objective-C runtime. Objective-C didn’t reduce the inherent complexity; it just hid it and made it uniform.

The LLVM team, on the other hand, has been trying to kill dynamic dispatch for a long time.

Since ARC, calling an arbitrary method by string, the core of dynamic dispatch, has triggered a compiler warning by default.

And of course, in pure Swift, that kind of string-based dynamic lookup is absent entirely. Everything must be known to the compiler ahead of time.

I understand why. They want to make the language safer.

Does that mean we’re seeing the reinvention of the custom lookup table in Swift?

Swift enumerations, for example, make this relatively easy, since you can pair methods, i.e. arbitrary code, with each enumeration case.
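
A hypothetical screen registry, with invented names, shows the shape:

    import UIKit

    // Stand-ins for whatever classes you want to reach by name.
    final class LoginViewController: UIViewController {}
    final class SettingsViewController: UIViewController {}
    final class ProfileViewController: UIViewController {}

    enum Screen: String {
        case login, settings, profile

        // Arbitrary code paired with each case. Adding a fourth screen means
        // adding a fourth case and a fourth branch: registration by hand.
        func makeViewController() -> UIViewController {
            switch self {
            case .login:    return LoginViewController()
            case .settings: return SettingsViewController()
            case .profile:  return ProfileViewController()
            }
        }
    }

    // Going from an arbitrary string to code, the job the Objective-C runtime
    // used to do for free.
    if let screen = Screen(rawValue: "settings") {
        let viewController = screen.makeViewController()
        print(type(of: viewController))   // SettingsViewController
    }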

If I have to write a new enum case for every new class, though, then I consider that unnecessary boilerplate, a throwback to C++ techniques. Boilerplate.

And I wish we didn’t have to go back down that road.

WWDC Sessions by Women and Minorities

My one WWDC 2017 prediction:

People will be talking, afterward, about the number of women and minorities up on stage during the Keynote.

This is highlighted by the (unconfirmed?) report that the breakout woman of color from last year’s WWDC Keynote, Bozoma Saint John, is leaving Apple soon.

From what I have read and experienced, Apple, like many tech companies, is struggling to increase its diversity in any substantial way. Rows and rows of white dudes (such as myself, when I was there) working on Apple’s hardware and software.

The executive team you see at the Keynote is important, but I don’t see it changing anytime soon. So I’m looking at the rest of the sessions.

The people who give the regular talks at WWDC are the people who work on the thing, or their immediate managers, who usually contribute technically as well.

So when I’ve seen more women up on stage for those talks in the last couple years, I’ve been pretty happy with it. They aren’t tokens. Apple really is employing more women to work on their stuff.

So that’s what I’ll be looking for as I watch the sessions (remotely) this year: more women and minorities throughout.

Thoughts on Learning a Little Scala

In some ways, Scala is the functional improvement over Java that Swift is over Objective-C.

  • Based on functional concepts, when the previous language was primarily OO.
  • Updated, modern, compact syntax.
  • Strongly typed, but with type inference, so complex type declarations can be omitted.
  • Compiler much slower, since it is doing more.
  • Strong interop with legacy code.

In other ways, though, Java to Scala is more like the transition from C to Objective-C:

  • Standard types taken from previous language.
  • Runtime taken from previous language.
  • Compromises made in language design to accommodate the previous language’s types and runtime.

Both Objective-C and Scala were languages invented outside the nurturing environment of a flagship corporation. They couldn’t reinvent the wheel. They didn’t have the resources. (Same could be said for C++.) So they had to find a “host”.

Java and Swift, on the other hand, had serious money behind them, and could do everything from scratch if they wanted. They could think big.

I believe you need both to make progress.

Apple is taking dramatic steps now. But they will eventually finish all the large-scale, cutting-edge elements they’re willing to sponsor for their business, and the pace of change will slow.

And then, once again, we’ll have to look outside of Apple for language innovation.

New Codebase, Who Dis?

I’ve found that just reading through a new codebase isn’t enough to get me comfortable with it.

I’ve even found that having it explained by the previous developers doesn’t do the trick.

What does “comfortable” mean? It means that I have an accurate mental model of it. I’ve internalized it. I don’t have to check the code or the documentation to know the following:

  • what the big features are
  • what it does well
  • what needs to be fixed about it
  • what looks bad but doesn’t need to be fixed right away
  • what the low-hanging fruit for code cleanup is
  • what OS features it doesn’t support, but should
  • what OS features it doesn’t support, and never will
  • what its predominant style is (or if it has one)
  • where the best place to put a helper method is

That list is just off the top of my head. I’m sure there’s more.

So, if just reading the code doesn’t work for me, what does?

Actually working on it.

Fixing bugs. Adding new features to existing code. Going through one full major release cycle, if possible.

Then I can start thinking concretely about making major improvements to it.

Twitter Image Descriptions

In my previous post, I talked about how Twitter could add an OCR capability to their system if they wanted to.

They haven’t, but they do have something related: the ability to add your own “image description” to the images in your tweets:

https://support.twitter.com/articles/20174660

You can add a full, separate image description for each image in a tweet, not just the first one.

Per that document, you have 420 characters to use for your image description.

You can’t add it for animated GIFs or videos, and you won’t be able to access the image description through the standard UI — only through things like screen readers.

And it doesn’t look like this functionality is available to third-party Twitter clients.

It’s not as good as the kind of OCR system I imagined. If there is text in your image, you’ll have to type it in yourself.

And if you use an image that has over about four hundred characters of text in it, you won’t be able to include all of it. That is a decent amount of text, however.

Twitter OCR Bot: a Failed Proposal

It’s a common practice on Twitter to tweet pictures full of text. Screenshots of email, screenshots of Tumblr threads, screenshots of newspapers, screenshots of a television screen with a scrolling news ticker. Oftentimes the tweet itself has little to no text describing the image contents.

It’s bothered me for a while.

Why?

Because it means that people who rely on screen readers to understand the Internet are completely shut out: for example, people who are blind or otherwise visually impaired.

If I’m remembering correctly, one of the rationales from Twitter when they floated the idea of allowing longer tweets was precisely so that people could add the kind of text to a tweet that you now need to use an image for.

I had an idea about a way to fix or at least help with that, without requiring Twitter itself to make changes.

Too bad it’s a terrible idea.

The idea? Make a bot, e.g. @OCRBot, that, when cc’ed on a tweet with an image, would run that image through OCR and then tweet the text back to the tweet’s author, split across multiple tweets if necessary.

At first, this seems like a wonderful idea. It doesn’t require the original tweet author to do anything, it doesn’t require Twitter to change its service in any way. The person who wants the text of the image just cc’s it to @OCRBot, and gets a tweet or a couple of tweets in response.
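
The benign half of that mechanism is simple enough to sketch. Below is a hypothetical helper that splits recognized text into tweet-sized pieces; the OCR step is left as a stub, since any OCR engine would do:

    import Foundation

    // Stand-in for whatever OCR engine the bot would call; purely hypothetical.
    func recognizeText(in imageData: Data) -> String {
        return "(recognized text would go here)"
    }

    // Split recognized text into chunks that each fit in one tweet.
    // 140 characters was the limit at the time this was written.
    func tweetChunks(from text: String, limit: Int = 140) -> [String] {
        var chunks: [String] = []
        var remaining = text[...]   // work on a Substring to avoid copying
        while !remaining.isEmpty {
            let chunk = remaining.prefix(limit)
            chunks.append(String(chunk))
            remaining = remaining.dropFirst(chunk.count)
        }
        return chunks
    }

    // The bot would then post each chunk as a reply, in order.
    let replies = tweetChunks(from: recognizeText(in: Data()))
    print(replies)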

The trouble is, it can easily be weaponized, so that the auto-generated tweets are also directed to a harassment target. After writing up one way to do that, I realized I didn’t want to give lazy harassers any easy ideas. So I’m not going to go into details about it here. (Nor am I interested in people hashing it out in the comments.)

Suffice it to say, I don’t think it would work as a third-party service for that reason, and I wouldn’t advise anyone to try it.

It could, instead, be built into Twitter clients, either by Twitter itself or by third-party companies.

Third-party Twitter client companies, however, probably already aren’t making much profit on those clients, and it would be unreasonable for them to shoulder the burden of such a service, which could be easily overloaded.

Twitter itself could handle the cost of such a service.

But I haven’t seen any indication of them heading in this direction.

Has anyone tried such a service that I’m just not aware of, or proposed anything similar?