Here are the favorites of mine that are still around and active.
Erica Sadun (Twitter)
I’ve known about Erica’s work since the 2013 Edge Cases episode “Rectangles on a String”. (I even know how to pronounce her last name, even if she does pronounce tuple wrong.) Her blog updates several times a week, often about some detail of Swift development. She’s written a bunch of books, mostly focused on developers, and one book on Swift.
Becky Hansmeyer (Twitter)
From her own description, her blog is “my own little place to comment on Apple & general technology news, as well as what it’s like to be a novice developer with no prior programming experience.” Her tag line is “100% grass-fed Swift”.
And finally, a great blogger who’s been very active recently but isn’t on the list:
I found an interesting (to me) aspect of Swift/Objective-C interactions this week.
Take this Objective-C method:
+ (nullable NSData *)dataWithString:(nullable NSString *)string error:(NSError **)error
It uses the standard Apple pattern of having both a return value and an error. (I left out the error’s nullability annotations for brevity, as Apple always assumes them.)
In theory — and, if I’m remembering correctly, according to Apple guidelines — first, you’re supposed to check if the return value is invalid. Only once you’ve verified that it’s invalid should you check to see if there’s an error.
And as far as I’ve been aware, there’s never been any assumption that you’ll get an error. That’s why, throughout your Objective-C code, you always have to check the return value and treat that as gospel.
If you use this method in Swift, the auto-generated Swift signature is:
func data(with string: String?) throws -> Data
I mean, besides the fact that Apple’s compiler/runtime magic smoothly converts between Objective-C’s last-parameter-is-an-error-pointer pattern and Swift’s “throws” pattern.
The return type doesn’t allow for nil anymore.
You can’t check for an invalid value, if “invalid” means nil.
Instead, you can only assume that the original Objective-C implementation will “throw” an error if there is a problem.
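You can see the same shape with a real Foundation API that follows the pattern. `String(contentsOfFile:encoding:)` is bridged from `-initWithContentsOfFile:encoding:error:`, and a sketch of the call site looks like this:

```swift
import Foundation

// String(contentsOfFile:encoding:) is bridged from the Objective-C
// initializer -initWithContentsOfFile:encoding:error:, which returns a
// nullable NSString plus an NSError**. On the Swift side there's no nil
// to check for; failure only ever arrives as a thrown error.
do {
    let text = try String(contentsOfFile: "/no/such/file", encoding: .utf8)
    print(text)
} catch let error as NSError {
    print("caught: \(error.domain), code \(error.code)")
}
```

The catch branch is the only failure path; there’s no optional to unwrap.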
Now, go back to your original Objective-C method. What if you return nil but don’t set the error? What does Swift do?
It does something clever.
In my testing, even when you haven’t set an error, the Swift translation layer throws an error anyway.
If you log it, it’s called nilError. It’s got a domain of Foundation._GenericObjCError and a code of 0.
Feels a bit like a hack, doesn’t it?
But it does prevent the problem of old Objective-C code not indicating the desired result under Swift.
I’m putting together a post comparing Mac drag and drop APIs and iOS drag and drop APIs.
Since I haven’t internalized the patterns of Objective-C-to-Swift method conversion, I was often frustrated trying to translate Objective-C method calls into Swift.
While I was working on the project, it seemed that 4 times out of 5, when I tried to go to a class or protocol’s declaration in Apple’s headers and see its Swift-ified methods, Xcode would take me to the Objective-C header instead, even though I was starting off in a Swift file.
Of course, now that I’m trying to reproduce it to file a Radar, it doesn’t happen. I wonder if that’s because the final project has no Objective-C files in it at all.
It doesn’t help that the translations changed between Swift 3 and Swift 4.
NSPasteboardTypeTIFF in Swift 3 is now NSPasteboard.PasteboardType.tiff in Swift 4, with a similar pattern for all its friends.
And register(forDraggedTypes newTypes: [String]) is now registerForDraggedTypes(_ newTypes: [NSPasteboard.PasteboardType]).
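For concreteness, here’s a hypothetical NSView subclass written against the Swift 4 spellings (macOS/AppKit only; the actual drag handling is elided):

```swift
import AppKit

// A hypothetical drop-target view, showing the Swift 4 names.
final class DropTargetView: NSView {
    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        // Swift 3: register(forDraggedTypes: [NSPasteboardTypeTIFF, NSPasteboardTypeString])
        // Swift 4: the string constants became NSPasteboard.PasteboardType values,
        // and the method reverted to its un-Swift-ified name.
        registerForDraggedTypes([.tiff, .string])
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```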
It’ll be nice to be working exclusively in Swift for the rest of this effort.
Now that Apple has announced a long-delayed revamp of refactoring in Xcode 9, it’s a good time to talk about my proudest refactoring moment over the last year:
I successfully broke apart a massive view controller in shipping code.
How? When you refactor a massive view controller, you know what you want your code to look like when you’re done. The places you’ll put all the diverse logic that’s currently twisted and squashed into one place.
Where you stumble is how to get it there while keeping it all working.
When I looked at my massive view controller, I saw about four different areas of responsibility, four different related sets of methods and instance variables. But they weren’t cleanly divided: if I tried to take out any one of those areas, I’d be dragging in bits of the other areas along with it.
So I did just that.
I made a new class, let’s say for Login functionality, and pulled out all the methods and related ivars.
But since the existing spaghetti code left behind in the view controller wanted access to some of those methods and ivars, I couldn’t make a well-designed Login class at this stage. Instead, I left plenty of methods public that should have been private, so the old view controller could still call them. I left plenty of properties public and read/write, so the old view controller could still get and set them. I think I wound up exposing a dozen properties in all.
It was a total mess.
But it still worked exactly like it used to, no regressions, because it was exactly the same code — just moved.
Then, instead of trying to fix up the Login class, I moved on to the next. Maybe the next one was Network Connectivity. Maybe the one after that was Model Loading. Whatever they were, I pulled everything related to them out into their own classes, doing nothing except cutting and pasting the code from one place to another.
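The intermediate state looked something like this — a minimal sketch with hypothetical names, where everything is deliberately left exposed so the remaining spaghetti can still reach it:

```swift
// Hypothetical extraction, mid-refactor. The point is that nothing is
// redesigned yet: methods and properties that ought to be private stay
// public so the old view controller keeps working unchanged.
final class LoginController {
    var username: String?        // should end up private(set), but not yet
    var isLoggedIn = false       // should end up read-only, but not yet

    func beginLogin(as user: String) {   // should end up internal to this class
        username = user
        isLoggedIn = true
    }
}

final class MassiveViewController {
    let login = LoginController()    // the remaining spaghetti reaches through this

    func loginButtonTapped() {
        // Old code, cut and pasted to call through the new class.
        login.beginLogin(as: "hypothetical-user")
    }
}
```

Ugly on purpose: the only invariant being preserved at this step is behavior.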
And as I went about it, a funny thing happened.
Even though I wasn’t trying to finalize the design or the APIs yet, for each new area I pulled out, I found I could refine the previously-extracted areas. Exposed properties that before were accessed seemingly at random, I could now see were only used by one of the specific areas I’d pulled out, and only at specific times. I could start to move properties around between the extracted classes, cut and paste them where they should go. I could move closer to the encapsulation I wanted.
All without breaking anything, because I was taking such tiny, straightforward, safe steps.
That meant, by the time everything was extracted, I was actually much closer to a final design than I had any right to be, given initial conditions.
At that point, I could finish the redesign through more conventional means.
Moving from C++ to Objective-C was a revelation to me.
In C++, dynamic lookup was a chore. Because the language was relatively static, if you wanted to go from an arbitrary key to code, you had to write your own custom lookup table.
I remember writing a lot of registration code, a lot of boilerplate. For each class or method I wanted to look up, I would put an entry in the lookup table. Maybe it was part of an explicit factory method, maybe it was a C macro, maybe it was some sort of template metaprogramming magic. But there had to be something, and you had to write it every time.
Boilerplate, boilerplate, boilerplate. Over and over again.
In Objective-C, the dynamic lookup mechanism was built into the language: dynamic dispatch. Look up any class, any method, with just a string.
I remember reading somewhere — I wish I remember where — a post where someone pointed this out, that the C++ technique and the Objective-C technique both required lookup tables, but in the latter case, it was maintained for you by the Objective-C runtime. Objective-C didn’t reduce the inherent complexity, it just hid it, made it uniform.
The LLVM team, on the other hand, has been trying to kill dynamic dispatch for a long time.
Since ARC, calling arbitrary methods by string, the core of dynamic dispatch, by default triggers a warning.
And of course, in pure Swift, that kind of string-based dynamic lookup is completely absent. Everything must be known ahead of time by the compiler.
I understand why. They want to make it more safe.
Does that mean we’re seeing the reinvention of the custom lookup table in Swift?
Swift enumerations, for example, make this relatively easy, since you can pair methods, i.e. arbitrary code, with each enumeration case.
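A sketch of what that looks like, with hypothetical names; the enum’s raw value gives you the string lookup, and the switch is the hand-maintained table:

```swift
// A hand-rolled string-to-code lookup table built from a Swift enum.
// Every new behavior requires a new case *and* a new switch arm: boilerplate.
enum Command: String {
    case greet
    case version

    func run() -> String {
        switch self {
        case .greet:   return "hello"
        case .version: return "1.0"
        }
    }
}

// Dispatch by string, the way the Objective-C runtime used to do for free.
func dispatch(_ name: String) -> String? {
    Command(rawValue: name)?.run()
}

print(dispatch("greet") ?? "unknown")     // hello
print(dispatch("explode") ?? "unknown")   // unknown
```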
If I have to write a new enum case for every new class, though, then I consider that unnecessary boilerplate, a throwback to C++ techniques. Boilerplate.
And I wish we didn’t have to go back down that road.
My one WWDC 2017 prediction:
People will be talking, afterward, about the number of women and minorities up on stage during the Keynote.
This is highlighted by the (unconfirmed?) report that the breakout woman of color from last year’s WWDC Keynote, Bozoma Saint John, is leaving Apple soon.
From what I have read and experienced, Apple, like many tech companies, is struggling to increase its diversity in any substantial way. Rows and rows of white dudes (such as myself, when I was there) working on Apple’s hardware and software.
The executive team you see at the Keynote is important, but I don’t see it changing anytime soon. So I’m looking at the rest of the sessions.
The people who give the regular talks at WWDC are the people who work on the thing, or their immediate managers, who usually contribute technically as well.
So when I’ve seen more women up on stage for those talks in the last couple years, I’ve been pretty happy with it. They aren’t tokens. Apple really is employing more women to work on their stuff.
So that’s what I’ll be looking for as I watch the sessions (remotely) this year: more women and minorities throughout.
In some ways, Scala is the same functional improvement over Java that Swift is over Objective-C.
- Based on functional concepts, when the previous language was primarily OO.
- Updated, modern, compact syntax.
- Strongly typed, but with type inference, so complex type declarations can be omitted.
- Compiler much slower, since it is doing more.
- Strong interop with legacy code.
In other ways, though, Java to Scala is more like the transition from C to Objective-C:
- Standard types taken from previous language.
- Runtime taken from previous language.
- Compromises made in language design to accommodate the previous language.
Both Objective-C and Scala were languages invented outside the nurturing environment of a flagship corporation. They couldn’t reinvent the wheel. They didn’t have the resources. (Same could be said for C++.) So they had to find a “host”.
Java and Swift, on the other hand, had serious money behind them, and could do everything from scratch if they wanted. They could think big.
I believe you need both to make progress.
Apple is taking dramatic steps now. But they will eventually finish all the large-scale, cutting-edge elements they’re willing to sponsor for their business, and the pace of change will slow.
And then, once again, we’ll have to look outside of Apple for language innovation.