iPad Pro Initial Thoughts

I’ve been working on an app intended for use with the Apple Pencil, so I went to the store and picked up an iPad Pro this morning. (Sadly, no Apple Pencil or Keyboard; both seem to be deeply backordered.) At my desk I have a Mac Pro, I carry a MacBook Pro for working on the go, and I have an iPhone 6 Plus and an iPad Air 2, so I’ve been thinking a lot about how the iPad Pro fits into my workflow as it is now.

I might publish more thoughts as I spend more time with the device, but I’ve already had a few reactions, both good and bad.

Good: The Hardware

On the outside it looks a lot like a bigger iPad Air 2, which isn’t a bad thing. Apple has added speakers on the “top” near the lock button and the “bottom” near the Lightning port (or the left and right of the iPad if you hold it in landscape). There are two sets of holes on each edge for stereo sound in both orientations.

The speakers sound very good for an iPad. The bass is audible, and the volume is much, much higher than my iPad Air 2’s. I’m not sure it sounds as good as the built-in audio on my MacBook Pro, but it’s at least pretty close. The built-in output is not a replacement for a decent pair of speakers, but it sounds great for a portable device. My only complaint is that Apple is still opting for side-mounted speakers instead of front-facing speakers. I hold the device by the sides, and it’s very easy for my hands to cover the speakers and muddle the sound.

I’m writing this without an Apple Keyboard or Pencil, and I’ll say the typing experience is miserable without them. Worse than the iPad Air. The on-screen keyboard is simply too big on the larger screen. Typing two-handed is bad enough, but with one hand it’s unbearable. I’m not sure this is going to be a good replacement for a laptop even with a physical keyboard, but if you’re going to be doing anything as basic as typing medium-sized emails, do yourself a favor and get the hardware keyboard. Long term, I’d love to see Apple add handwriting recognition, even if it’s not perfect. They at least have a starting point with the Mac’s handwriting recognition that’s available for graphics tablets.

The performance is good. I’ve seen the synthetic benchmarks that make the performance look very favorable compared to the iPad Air 2. But some of the numbers I’ve seen running my own apps indicate that real applications may actually be imperceptibly slower. The extra CPU gains Apple made with the A9X may be getting used up driving the larger display. The iPad Air 2 always felt like a snappy device, so if Apple is just able to deliver the same user experience on the iPad Pro, it’s not a significant issue. But if you’re coming from an iPad Air 2, I’m not convinced things are going to feel significantly faster.

Speaking of the display size… it’s big. I told someone earlier that it feels like I’ve been given a novelty giant iPad as a joke. Not in a bad way, I like the extra real estate, but it’s not an easy-to-carry device like the smaller iPads. Most of the time I use it I let it lie completely flat on a desk instead of holding it (which makes me regret not buying the Smart Cover to prop it up with).

I’ve been tinkering with creative apps on it, and the extra screen size is great. As I mentioned, I don’t have my Apple Pencil yet, and I’m sure the hardware will feel even better once it arrives.

The one thing I’d like to see on a future iPad Pro is support for Thunderbolt, and beyond that, support for pointing devices. One impressive thing about the Surface Pro is the transition it can make to a desktop PC when you plug it into a docking station. It would be nice to be able to plug an iPad Pro into a Thunderbolt display and make use of a wired network, keyboard, mouse, and other accessories.

Bad: The Software

When I talked about the hardware, I mentioned a lot about how it just felt like a bigger iPad Air 2. This is a good thing. With the software, it’s pretty much the same thing: it feels like a bigger iPad Air 2. This is a bad thing.

Originally I was on the fence about whether I should buy an iPad Pro or a Surface Pro. The attractive thing about the Surface was the lack of boundaries put in place by the software. Want to run real Photoshop with a bunch of windows? Go ahead! Mount network shares or plug in a USB printer? No problem! Run a DOS emulator to play a 20-year-old game that happens to be touch friendly? Go for it!

A lot of apps have been updated, but there are still some strange gaps. GarageBand doesn’t seem to be updated for the iPad Pro screen. Neither does the WWDC app. (One of my favorite third-party games, Blizzard’s Hearthstone, doesn’t seem to be either.) I was expecting a premier Apple application like GarageBand to be updated before launch. Apps like Keynote have been updated, and they look great on the display. Apps that haven’t been updated simply appear stretched, and they look clearly pixelated next to modernized applications that look brilliant on the iPad Pro display.

Some apps that have been updated have an annoying habit of leaving the additional space empty. Messages, Twitter, and News all deal with the extra space by just leaving ridiculous margins around content. I’m hoping this gets fixed in time.

The big problem with iOS on the iPad Pro is that it still struggles with the productivity basics. Multitasking has been nice on the iPad Air 2, and it’s certainly better on the iPad Pro. But it can still only run two apps on screen at the same time. Navigating between applications is slow and cumbersome. And worse yet, you can still only have one window per application open at a time. Want to compare two Excel spreadsheets side by side? Nope, out of luck.

Initial setup was also not great, as I realized how fragmented applications have become. Panic and Adobe both have excellent apps on the iPad Pro, but both have their own sync services with their own separate logins, because Apple has placed restrictions on iCloud usage and doesn’t provide any sort of single sign-on service to fill the gap. (And to be clear: I’m not blaming Adobe or Panic for a situation that is rooted in how Apple treats applications.) I dug into the Adobe apps only to realize I didn’t have my stock artwork available. I couldn’t log in to my network share to copy the artwork down, and I couldn’t download a zip of it from the internet because there is no app to decompress the zip, and no filesystem to decompress it to. Adobe seems to have a way to load the artwork into their proprietary cloud, but I haven’t done so yet, and I shouldn’t have to set up a new proprietary cloud system for every application just to load some files in.

The iPad Pro still shares the same basic problem as its older sibling, the iPad Air 2: productivity on the device is killed by a thousand tiny paper cuts. I’m not trying to say you shouldn’t buy one, but you should expect the same productivity from it as you would from an iPad Air 2. The screen size can’t solve the productivity issues without the software.

I’ll revisit this when the Apple Pencil arrives. I’ve heard really great things about it, and I’m sure for artists this will be a great supplemental device. But I don’t think anything about the iPad Pro makes it a dramatically better PC replacement than the iPad Air 2. If the iPad Air 2 has been a good PC replacement for you, the iPad Pro will continue to be one, just with a larger screen. Otherwise, Apple’s continued resistance to making iOS more serious for professional workflows will just slow you down compared to a MacBook.

I don’t mean to be too down on the iPad Pro. I’ve mostly been talking about the iPad Pro as a PC replacement because Apple has been talking about the iPad Pro as a PC replacement. The hardware is great, and I can definitely see some sort of future here. I’m not totally convinced that a touch-based tablet can take the place of a laptop with a dedicated keyboard and trackpad (something Apple themselves have repeatedly said in response to other faux-laptop tablet combos like the Surface Pro), but for me it’s easy to see this as a good ultra-portable device. And as a developer, I see all sorts of cool things I could do on a touch-based device this large and this powerful. But as a user, the software still holds me back from getting things done as efficiently as I could on a laptop. I know my needs are greater than those of most PC users, but I’m just not convinced that the iPad Pro has changed the decision-making process someone goes through when buying a tablet vs. buying a PC.

Swift Needs KVO/KVC

I’m just finishing up my first App Store-bound project written in Swift. It’s nothing hugely exciting, just a giant calculator sort of application. One thing I liked about Swift is that its static typing really made me think about the data layer and how data flows through the application. What I missed terribly was KVO/KVC, and I’m not alone. Brent Simmons has mentioned this as well, and as someone who’s used a lot of KVO and KVC over the years, I find it’s helped me ship code a lot more quickly, and it has been one of the most valuable features of the Mac frameworks. A lot of developers who are new to the platform aren’t aware of these constructs.

The idea is something like this: we’ve done a really good job of optimizing the model layer of Model/View/Controller applications, and Swift has improved things further. Static typing provides huge advantages in reliability and coherency. But the Obj-C philosophy is really about re-usable components. In that philosophy, components written by one vendor need a way to seamlessly talk to components from another, and this is really where Swift and static typing fall flat. A view from one vendor or component needs a way to render data from a model that belongs to another component. We find this even in the system, where a component like Core Data needs to be passed into a controller where it needs to be searched, or…

Hold on. I can hear the Swift developers already. “We have protocols and extensions for that! I can make a component from one source talk to a component from another source. All I need to do is define a protocol in an extension and I can have my static typing and everything!”

Ok. Let’s go down the rabbit hole.

The Swift Protocol Route

Let’s take a classic case, one Apple actually shipped on the Mac in Mac OS X 10.4: I want a controller that, given an array of objects as input, will filter the array based on a search term and output the result. The key here is that my search controller doesn’t know the input type beforehand (maybe it came from a different vendor) and my input types don’t know about the search controller. I want a re-usable search controller that I can use across all my projects, with minimal integration effort, to save implementation time.

Using protocols, you might define a new protocol called “Searchable”. You extend or modify your existing model objects to conform to the protocol. Under the “Searchable” protocol, objects would have to implement a function that receives a search term string and returns true or false depending on whether the object thinks it matches the search term. Easy.
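
Concretely, the protocol route might look something like this. This is a hedged sketch in Swift 2-era syntax; Searchable, Employee, and the matching logic are all hypothetical names for illustration:

import Foundation

// Hypothetical model type for the example.
class Employee {
    var firstName: String?
    var lastName: String?
}

protocol Searchable {
    // Each conforming type decides for itself whether it matches the term.
    func matchesSearchTerm(searchTerm: String) -> Bool
}

extension Employee: Searchable {
    func matchesSearchTerm(searchTerm: String) -> Bool {
        // The actual matching logic ends up re-implemented inside every type.
        return lastName?.localizedCaseInsensitiveContainsString(searchTerm) ?? false
    }
}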

But there are a few problems with this approach. The controller has become blind to how the search is actually performed, which isn’t what I wanted at all. The idea was that the controller would perform the search logic for me so I didn’t have to continuously rewrite it, and now I’m rewriting it for every searchable object in my project. If I need search to be customizable, where the user selects which fields to search, or options like case-sensitive or starts-with/contains matching, those options now need to be sent down into each Searchable object, and then logic written in each object to deal with them. Reusable components were supposed to make my code easier to write, and this sounds worse, not better.

Maybe I could try to flip this around. Instead of having extensions for my objects, I could have a search controller object that I subclass and fill in with details about my objects. But I’d still have the same problem: I’m writing a lot of search logic all over again, when the point is to reuse my search logic between applications.

(If you’ve used NSPredicate, you probably know where this is going.)

Function Pointers

Alright, so clearly we were implementing this in a naive way. We can do multiple searchable fields. When the search controller calls in to our Searchable object, we’ll provide it back a map of function pointers to searchable values, associated with a name for each field. This way all the logic stays in the controller. It just has to call the function pointers to get values, decide whether the object meets the search criteria, and then either keep or discard it. Easy. And we are getting closer to a workable solution, but now we have a few new problems.
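
In Swift, those “function pointers” would most naturally be closures. Here’s a hedged sketch of the value-map scheme, revising the earlier Searchable idea (all names hypothetical):

class Employee {
    var firstName: String?
    var lastName: String?
}

protocol Searchable {
    // Field names mapped to closures that produce each field’s current value,
    // so the matching logic itself can live in the controller.
    var searchableValues: [String: () -> String?] { get }
}

extension Employee: Searchable {
    var searchableValues: [String: () -> String?] {
        return [
            "firstName": { self.firstName },
            "lastName":  { self.lastName },
        ]
    }
}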

Live search throws a wrench into this whole scheme. Not only do we need a way to know whether an object meets the search criteria, but we also need a way of knowing when an object’s properties have changed in a way that could change its inclusion in our search. This is especially important if I have multiple views. Maybe I have a form and a graph open for my model objects in different windows. If I change an entry in the form, I’d want the graph to live update, and the form view and the graph view might have no knowledge of each other. So we need a way to call back to an interested observer of the object when a value changes. We could use a timer to check every second or so for changed values, but in some scenarios that would be a very expensive and needless operation. So while it would work, performance and battery life would significantly suffer. And it’s more code we don’t want to write.

There’s also the issue of nested values. Maybe what I’m searching are objects that represent employees, but now I also want to search on the name of the department each employee belongs to. In my object graph, it’s very likely that departments will be another model object type with a relationship to employee objects. So now I’m looking at returning function pointers not just for my employee objects, but for the department objects they belong to. And now I need to communicate changes not only in my object’s own values, but in its relationships to other objects.

Also, there is the small issue of this approach not working with properties. As far as I know, you can’t create a function pointer to a property. So now I need to wrap all my properties in functions.

This is getting complicated again. Once again I’m writing a lot of code, and not saving any time at all. There has got to be a better way.

Key Value Coding

Well fortunately after years of going through this same mess in other languages, Apple came up with Key Value Coding as a solution.

Key Value Coding is extremely simple: it’s a protocol that allows any Obj-C object to be accessed like a dictionary. Its properties (or getter and setter functions) can be referred to by using their names as keys. All NSObject subclasses have the following functions:

func valueForKey(_ key: String) -> AnyObject?
func setValue(_ value: AnyObject?, forKey key: String)

(Reference to the entire protocol, which contains some other interesting functions, is here.)
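
To make that concrete, here’s a quick sketch of KVC access, using the Employee class defined just below:

let employee = Employee()
employee.setValue("Smith", forKey: "lastName")
let lastName = employee.valueForKey("lastName") as? String // Optional("Smith")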

Now my search controller is easy. I can simply tell the search controller all the possible searchable properties like so:

class Employee: NSObject {
    dynamic var lastName: String?
    dynamic var firstName: String?
    dynamic var title: String?
    dynamic var department: Department?
}

searchController.objects = SomeArrayOfEmployees
searchController.searchKeys = ["firstName", "lastName", "title"]

Now I can have a generalized search controller, which I can share between projects or provide as a framework to other developers, that doesn’t have to know anything about the Employee object ahead of time. I can describe the shape of an object using its string key names. Under the hood, my search controller can call valueForKey, passing the keys as arguments, and the object will dynamically return the values of its properties.
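
Under those assumptions (the objects and searchKeys properties from the snippet above), the controller’s core filter might look roughly like this sketch:

func filteredObjects(searchTerm: String) -> [NSObject] {
    return objects.filter { object in
        // An object matches if any one of its declared keys contains the term.
        self.searchKeys.contains { key in
            guard let value = object.valueForKey(key) as? String else { return false }
            return value.localizedCaseInsensitiveContainsString(searchTerm)
        }
    }
}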

Another great example of the advantages of keys is NSPredicate. NSPredicate lets you write a SQL-like query against your objects, which is hard to do without being able to refer to your objects’ fields by name.
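
For instance, a sketch of the same search expressed as a predicate (employees is assumed to be an array of the Employee objects above):

let term = "smith"
let predicate = NSPredicate(format: "lastName CONTAINS[cd] %@ OR department.name CONTAINS[cd] %@", term, term)
let matches = (employees as NSArray).filteredArrayUsingPredicate(predicate)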

There is a catch. If you’re a strong static typing proponent, you’ll notice that none of this is statically typed. I’m able to lie about what keys an object has, as there is no way to enforce beforehand that a key name I’m giving as a string actually exists on the object. I don’t even know what the return type will be. valueForKey just returns AnyObject?.

Quite simply, I don’t think static typing helps this use case. I think it hurts it. I don’t see a way to make this concept workable without dropping static typing, and I think that’s OK. Dynamic typing came about because of scenarios like this, and it’s fine to use dynamic typing where it works better. And all isn’t lost: when our search controller ingests this data, if it’s a Swift object, it will have to transition these values back into a type-checked environment. So even though static typing can’t cover this whole use case, it improves the reliability of using Key Value Coding by validating that the values for keys actually have the types we assumed they would.

Key Value Paths

There are a few problems KVC hasn’t solved yet. One is the object graph problem discussed above: what if we want to search the name of an employee’s department? Fortunately, KVC solves this for us! Keys don’t have to be just one level deep, they can be entire paths!

The KVC protocol defines the following function:

func valueForKeyPath(_ keyPath: String) -> AnyObject?

keyPath: A key path of the form relationship.property (with one or more relationships); for example “department.name” or “department.manager.lastName”.

Hey look, that’s uhhhh, exactly our demo scenario.

So now I can traverse several levels deep in my object. I can tell my search controller, after some modification, to use a key path of “department.name” on my employee object.

searchController.objects = SomeArrayOfEmployees
searchController.searchKeyPaths = ["firstName", "lastName", "title", "department.name"]

Now internally, instead of calling valueForKey, my search controller just needs to call valueForKeyPath. Single-level keys work fine with valueForKeyPath, so my existing keys will keep working.
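
For instance, assuming the employee object from earlier:

let departmentName = employee.valueForKeyPath("department.name") as? String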

Notice that valueForKey and valueForKeyPath are functions called on your object. I’m not going to do a deep dive right now, but you could use these to implement fully dynamic values for your keys and key paths. Apple’s implementation inspects your object and looks for a property or function whose name matches the key, but there is no reason you can’t override the same function and do your own lookup on the fly. It’s useful if your object is abstracting JSON or perhaps a SQL row.
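
As a hedged illustration of that idea, NSObject’s valueForUndefinedKey(_:) is one override point for this kind of on-the-fly lookup (JSONBackedObject is a made-up example):

import Foundation

class JSONBackedObject: NSObject {
    let json: [String: AnyObject]

    init(json: [String: AnyObject]) {
        self.json = json
        super.init()
    }

    override func valueForUndefinedKey(key: String) -> AnyObject? {
        // Fall back to the JSON payload for keys with no declared property.
        return json[key]
    }
}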

It’s also important that this works on any NSObject. I can insert placeholder NSDictionary objects for temporary data right alongside my actual employee objects, and the same search logic will work across them. As long as the object has lastName, firstName, title, and department values, the object’s type no longer matters.

Key Value Observing

Well, all that’s great, but we still have one more issue: we need to know when values change. Enter Key Value Observing. Key Value Observing is simple: any time a property is set, or a setter function is called, a notification is automatically dispatched to all interested objects. An object can signal interest in changes to a key’s value with the following function:

func addObserver(_ anObserver: NSObject,
      forKeyPath keyPath: String,
         options: NSKeyValueObservingOptions,
         context: UnsafeMutablePointer<Void>)

(It’s worth checking out the other functions. They can give you finer control over sending change notifications. Also look up the documentation for specifics on the change callback.)

Notice that the function takes a key path. An employee’s department name will change not only if the department’s name changes, but also if the employee’s department relationship changes. Observing any change to any object within the “department.name” path covers both cases.

It’s also worth checking out the options. We can have the change callback provide both the new and old value, or even the inserted rows and removed rows of an array. Not only is this a great tool for observing changes in objects that our class doesn’t have deep knowledge of, but it’s just great in general. This sort of observing is really handy for controlling add/remove animations in collection views or table views.

In our search controller, we just need to observe all the keys we are searching on all the objects we are given, and then we can recalculate the search on an object-by-object basis. There are no timers running in the background; the change fires directly from an object’s value being set.
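
Pulling the pieces together, here’s a minimal sketch of the observation side in Swift 2-era syntax. The controller’s shape and recalculateSearchForObject are hypothetical; the observer callback follows Apple’s documented KVO pattern:

import Foundation

private var searchObservationContext = 0

class SearchController: NSObject {
    var objects: [NSObject] = []
    var searchKeyPaths: [String] = []

    func startObserving() {
        for object in objects {
            for keyPath in searchKeyPaths {
                object.addObserver(self, forKeyPath: keyPath,
                                   options: [.New, .Old],
                                   context: &searchObservationContext)
            }
        }
        // (Remember to call removeObserver(_:forKeyPath:) before objects go away.)
    }

    override func observeValueForKeyPath(keyPath: String?, ofObject object: AnyObject?,
                                         change: [String : AnyObject]?,
                                         context: UnsafeMutablePointer<Void>) {
        guard context == &searchObservationContext else {
            super.observeValueForKeyPath(keyPath, ofObject: object, change: change, context: context)
            return
        }
        // Only this one object needs to be re-tested against the search term.
        recalculateSearchForObject(object)
    }

    func recalculateSearchForObject(object: AnyObject?) {
        // Re-run the match for just this object and update the results (omitted).
    }
}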

So what’s the problem in Swift?

I’ve mentioned one problem already: only classes that subclass NSObject can provide KVO/KVC support. Before Swift, that wasn’t a major problem. Now with Swift, we have classes that don’t subclass NSObject, and non-class types. Structs can’t support KVO/KVC in any fashion.

The properties and functions being observed also have to be dynamic. Again, not a problem in Obj-C, where all functions and properties are dynamic. But not only are Swift functions not dynamic by default, some Swift types aren’t supported by dynamic functions at all. Want to observe a property whose type is a Swift enum? Can’t do that.

Even more worrisome, the open source Swift distribution may not include any dynamic support at all, and KVO/KVC are defined as part of the Cocoa frameworks, which aren’t likely to be included with open source Swift. Any code that wants to target cross-platform Swift might be forced to avoid KVO/KVC. Ironically, just as we could be entering a golden age of framework availability with Swift, we might be discarding the technology that makes all those frameworks play cleanly with each other.

So what would I like to see from Swift?

  • Include KVO/KVC functionality as part of core Swift: KVO/KVC are currently defined as part of Foundation. They don’t need to be moved, but Swift needs an equivalent that can bridge and that is cross-platform.
  • Have more dynamic functionality on by default: Dynamic functionality is currently opt-in, for a good reason: things like method swizzling won’t work with Swift’s static functions. But Apple could split the difference and allow statically linked functions (and properties) to at least be looked up dynamically. This would let functionality like KVO and KVC work without giving up direct calling of functions or re-opening the door to method swizzling.
  • Have the KVC/KVO replacement work with structs and Swift types: Simple. Enums in Swift are great. Now I just want to access them with KVC and observe them.

Swift and Obj-C: 2015 Plans And Obstacles

My current plans for Swift adoption in 2015:

  • I just finished a contract project in Swift 1.2. It was a really great experience for working through Swift and finding its strengths and weaknesses.
  • At work, we may introduce some Swift 2.0 into our application for new source, but there is currently no pressing need to transition anything existing.
  • The library we ship to other developers will continue shipping in Obj-C. There are currently no plans to port it to Swift, as Swift 2.0 still doesn’t meet our requirements and a full port would be technically impossible. But we’ll be cleaning up the interface to bridge well to Swift, and hopefully we’ll begin distributing it as a framework in our next major version.

There are reasons for these different approaches, and some things I’d still love to see from Swift to be more comfortable with it:

C++ Support

Swift still cannot directly link to C++ code. The current workaround is to wrap C++ code in Obj-C. The two schools of thought on this are that Swift will never get C++ support, or that C++ support may come in a future enhancement. This is a big reason the library we provide to third-party developers will not get a full Swift port in its current form: it contains shared cross-platform C++ code. If Swift does not get C++ support, then Obj-C will be used in the project indefinitely. And we’re not alone: Microsoft, Adobe, and Apple are just a few of the companies that have large C++ code bases for cross-platform projects, and it’s unlikely they’d be able to drop Obj-C either. It’s possible to add another layer of abstraction in Swift around our Obj-C code, but that doesn’t seem like an efficient use of resources, especially when Obj-C bridging is making such large strides.

Dynamic Swift

Swift is still a mixed bag when it comes to using some of the more dynamic concepts from Obj-C. KVO can still be painful, especially when trying to mix in Swift concepts. For example, it’s not possible to declare a dynamic property whose type is a Swift enum. Having to declare functions dynamic at all is also a painful design decision. It seems it would be possible for Apple to split the difference: they could allow functions to be accessed both statically and dynamically, automatically. Directly calling a function could follow the fast, static path, while dynamic observation and calls could go through a lookup table that simply forwards to the static function. My understanding is that dynamic functions are currently all or nothing, but a middle ground where functionality like method swizzling is just not supported, or not supported well, would be a good compromise.
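
For example, something along these lines fails to compile today (a sketch; the diagnostic wording is from memory and may vary by compiler version):

import Foundation

enum Status {
    case Active, Inactive
}

class Employee: NSObject {
    // error: property cannot be marked dynamic because its type
    // cannot be represented in Objective-C
    dynamic var status: Status = .Active
}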

Like C++ support, there are two schools of thought on this. One is that Swift should be a totally static language and that Apple platforms should make a huge shift away from dynamic styles of programming. The other is that Swift can find some sort of middle ground, which I really hope is the course Apple takes. Swift really shines in situations that pair well with type safety, but it can be really painful in the dynamic programming where Obj-C excels. Obj-C was born out of painful experiences with languages that were too static, and it would be a shame to backtrack on that.

Language Stability

Swift is still an unstable language, which causes deeper problems beyond just maintaining source. The Swift 2.0 ABI is still considered unstable by Apple (which I re-confirmed post-WWDC), so shipping a precompiled library/framework is still not an advisable option (this is the other significant obstacle to shipping our library with any Swift in it). An unstable ABI means that a Swift project can only link to pre-compiled Swift code that was compiled with the same version of Xcode. This makes it difficult to serve multiple customers with the latest fixes at the same time.

The unstable nature of the language itself is a smaller problem, but it can still be an obstacle. If we have multiple active branches, changes in Swift can cause schisms in our repository. It also makes recompiling and re-signing an older application much more difficult. We’d feel a lot more secure in our Swift investments if the language were more stable. iOS has small API changes from release to release, but Obj-C itself has been extremely stable.

Cocoa Support

Swift is still unwieldy under Cocoa, but Swift 2.0 and the latest release of Obj-C have made huge strides, and I’m really looking forward to them. I think this will continue to improve, but the parts of Cocoa that use pointers are still painful, and types like NSNumber don’t bridge to native Swift types. One really bright spot is Swift 2.0’s formalized error handling, which is a great advance and works extremely well with Cocoa. I’d like to see the dynamic support in Swift brought up to speed to better bridge with Cocoa APIs and KVO. I’d even be OK with a KVO replacement in Swift as long as it could bridge. Observing class properties is a big part of Cocoa, and I’d like it to be a big part of Swift as well. The exception enhancements are welcome too: I was happy that Swift had eliminated exceptions, but I quickly ran into problems porting existing unit-testing code that tested a Cocoa-style API.

My Wish For Swift At WWDC: C++ Support

At work, we support a lot of platforms: iOS and Android, Windows, Linux, supermarket checkout scanners, Raspberry Pis, old Windows CE devices, and more. All of those devices run the same (large) core code, and all of that code is written in C++. I’m not the biggest fan of C++, but there’s no doubt that when we need to write something that works across a range of platforms, it’s a rich, commonly understood tool. It’s also been a massive blocker for Swift adoption for us.

For our mobile customers, we provide both Java and Obj-C APIs. They’re both just wrappers around our C++ core, and they do the conversion from the Obj-C or iOS native formats into the raw buffers we need to handle in our C++ core. Whenever I look at doing a native Swift SDK in the future, I’m still stuck on not having C++ support from Swift code. In order to provide a pure native Swift API, I’d have to wrap our ever-growing C++ source base once in Obj-C, and then wrap it again in Swift. It just doesn’t make sense to wrap the same code twice over.

WWDC OS X Wish List

As WWDC fast approaches, and rumors of the next OS X update focusing on polish persist, I thought I’d go over my wish list for what I’d like to see Apple address.

Vulkan/Enhanced Graphics Support

The graphics situation on OS X is bad. 3D graphics on OS X have been rocky from the beginning (going back to the first-ever public OpenGL demo on OS X, which sadly I cannot find video of), but in the past Apple was willing to fight a benchmark war with DirectX. At WWDC 2006, I remember Apple still rolling out new features to try to compete with DirectX’s performance. And with 10.6, Apple introduced OpenCL, which went on to become an industry standard for GPU-based computation.

But recently, while Linux and Windows have enjoyed continuous cycles of improvement in 3D graphics, OS X has seen improvements only in fits and starts. Mavericks, which moved to OpenGL 4, looked like it might be a recommitment to improving 3D graphics on OS X. But Yosemite didn’t seem to bring any real improvement to OpenGL, at a time when OpenGL on the platform was already seriously lagging.

There is some speculation that Apple will bring Metal to Mac OS X, which from what I’ve heard seems like a strong possibility. Metal on Mac OS X might provide a direct route to high-performance 3D and 2D graphics, but Apple might have a long road ahead in convincing developers to adopt Metal, and GPU manufacturers to write drivers for it. OpenGL driver quality on OS X is already hit-and-miss enough that it’s hard to believe Apple could get AMD and Nvidia to write good drivers for Apple’s proprietary Metal platform. I have no doubt that Apple could get a few software vendors on stage to demo Metal support on the Mac (Epic seems like a good possibility), but there are several third-party vendors (like Adobe or Valve) that I don’t see being eager to support yet another standard. I could see Adobe dragging their feet for a long time on supporting Metal, or possibly never supporting it at all.

Recently, the next version of OpenGL, called Vulkan, was announced, and I think it would be a great chance for Apple to really improve cross-platform graphics on OS X. Vulkan already has strong support from Valve (which is huge for a developer that used to be firmly in the DirectX camp), and it is more likely than Metal to draw support from Adobe. The word is that Apple is still on the fence about supporting Vulkan on the Mac. Vulkan covers the same capabilities as Metal, which puts it in an awkward position on OS X. But there will be a lot of smaller developers, especially in the professional software community, who won’t be able to put forth the effort to port to Metal, so it would be foolish not to support Vulkan. Vulkan also promises a simplified driver architecture, which could be a boon given the traditionally complex development of OpenGL drivers on Mac OS X. I’d even suggest that maybe Apple should spin down development of Metal and really focus on Vulkan instead, but I don’t think the politics of Apple would allow for that.

At the very least, seeing support for OpenGL 4.5 in Mac OS 10.11 would be ideal, but it would be great to see Vulkan support. Metal would be an improvement as well, but I worry it would further alienate developers of professional software and games if Metal were the only modern API present.

A Modern Window Server

(I don’t have visibility into the OS X window server, so this section is based on speculation on how Apple has implemented some parts of Yosemite. Please let me know if there are any corrections to be made.)

OS X’s window server is another component that had a very impressive start, looked to have a very impressive future, and then suddenly seemed to have public-facing development stop while competing platforms passed it by.

The very first version of the Mac OS X window server did every bit of its drawing on the CPU. That made sense at the time: capable dedicated GPUs were still not common on the hardware OS X supported, and the required OpenGL support may not have been present either. But as a result, basic things like dragging windows across the screen were choppy on the first versions of Mac OS X.

In 10.2, Apple added something called Quartz Extreme. While everything inside a window was still drawn on the CPU under Quartz Extreme, the contents of the windows themselves were stored on the GPU, meaning operations like moving a window around the screen became very quick. There was a further project called Quartz 2D Extreme (later QuartzGL) which would have drawn the contents of the windows themselves with the GPU. This project was comparable to Windows Vista’s Desktop Window Manager, which promised similar features to unlock fancy effects such as windows with a glass-like transparency. While the new compositor was the source of a lot of Vista’s early performance issues, high hardware requirements, and consumer confusion (Microsoft had two tiers of Vista certification for hardware, and one was not compatible with it), Microsoft’s investment in a fully GPU-accelerated window server eventually paid off with a fast and responsive user interface capable of rendering advanced effects with ease.

Apple’s QuartzGL project eventually fizzled out due to performance issues (some interesting benchmarks are still posted here). With 10.11, it might be time for Apple to take a second look at extending QuartzGL further. Microsoft has pushed through its performance issues, and many of the visual effects Apple is trying to do in Yosemite could be sped up by GPU acceleration. The “glass” vibrancy effect OS X uses doesn’t seem to be GPU accelerated, which might be causing some performance issues (I sometimes see vibrancy-backed views “lag” behind their window being drawn, which leads me to believe the vibrancy effect isn’t drawn as part of window compositing on the GPU). GPUs are really good at these sorts of effects, and it would be great to have a window server that could do things like flag a portion of a window to the GPU as transparent, and then attach a shader to that portion of the window. As with Windows, not every element on the display would have to be drawn on the GPU. Certain views could opt in to GPU-backed drawing (such as the vibrancy view), which would help the performance of views that could benefit without hurting the performance of others.

My understanding is that QuartzGL is still present in some form in Yosemite, and that applications can opt in to it, but it would be great to see QuartzGL taken a few steps further to allow the advanced OS X vibrancy effects to be GPU backed.

Networking and WebKit Enhancements

Yosemite’s networking woes have been well documented at this point, but I’ve also noticed a decline in Safari’s quality. HTML5 video in particular misbehaves: the controls seem to be in the wrong place, the video is scaled incorrectly, and sometimes it doesn’t play at all. I’m having to open Chrome more and more often these days, and I really wish I didn’t have to.

Pro Hardware

As a follow-up to my post about the Mac Pro, there’s one more thing I wouldn’t mind seeing at WWDC, though it’s really a long shot. If Apple doesn’t care to continue maintaining their pro hardware, they should think seriously about licensing OS X. It doesn’t have to be a licensing free-for-all; maybe they could license only to certain HP product lines, for example. And Apple hasn’t been against licensing in the past. Apple’s failures in markets like the server market weren’t necessarily because OS X made a poor choice for servers, but because the company didn’t seem willing to provide the support services required for that market. Apple’s pro hardware may be in a similar place.

If Apple really comes out supporting pros over the next year in a big way, I’ll gladly eat my words. But I’m becoming more and more convinced that licensing OS X would be the best course of action: it would let Apple keep users who are happily buying Apple software (and who would be paying Apple licensing fees), rather than having them leave the platform entirely. If Apple isn’t willing to dirty themselves with towers that can be opened and customized, or 1U rack-mount servers, they could let someone else serve those markets for them.

End of Sandboxing For the Mac App Store

The Mac App Store is slowly dying, and sandboxing is a big reason why. Apple either needs to fix sandboxing or remove it as a requirement. This is another place where Windows 10 is pulling ahead of Apple.

Real Exchange Support on OS X

Microsoft Outlook, especially the new beta version, provides a good Exchange client on Mac OS X. But I’d love to see the built-in support enhanced. Mail.app still doesn’t support push email, and the Exchange client in iCal still sucks (a better meeting availability assistant, please).

Conclusion

A lot of the suggestions I’m making are not user facing; they’re foundational. OS X hasn’t had many foundational enhancements in almost 10 years (not counting security-oriented things like Gatekeeper). Microsoft has put a lot of work into its foundations, at a cost in time and money that Apple has so far been unwilling to commit. My worry is that if Apple doesn’t concentrate on the less glamorous aspects of OS X, like graphics performance, the user experience will continue to decline. Apple is simply stacking too many new features on an aging foundation. Beyond stabilizing the features they already have, Apple really needs to look at building a foundation for the next 10 years of the Mac.

Where is the no compromise Mac Pro?

I’ve been looking at replacing my 2008 Mac Pro, and as much as I’d like to replace it with a new Mac Pro, the new Mac Pro looks odd when you compare it to both Windows products and other Macs.

The most visible missing feature, which has been widely commented on, is the lack of an Apple 5K display. I don’t really want to buy a Mac Pro at the present time because I’m pretty sure it’s going to be outdated by Thunderbolt 3, and I’m not sure what the 5K support picture will look like for the current Mac Pro in the future.

Obj-C and Swift, not Obj-C vs. Swift

On Twitter, I’ve been vocal that I don’t believe Obj-C is going away any time soon. For reference, Apple’s stance on the matter is:

“Objective-C is not going away, both Swift and Objective-C are first class citizens for doing Cocoa and Cocoa Touch development.”

It’s important to note that while there are a lot of community websites stating that Swift is the replacement for Obj-C, that has never been any sort of official position from Apple.

This “there can only be one language” phenomenon also seems entirely unique to the Apple developer community. The Windows .NET environment is up to approximately two dozen supported languages over a common API, and that has made the community stronger, not weaker. Yet instead of taking advantage of a position where we have two languages, each with its own unique strengths, we’re asking which one should be sent to the chopping block.

But even if it’s just my opinion that it’s a good thing to have two languages around, let’s review why, realistically, Apple probably isn’t going to stop development on either Swift or Obj-C.

Did 9/11 Kill Star Trek?

(There are Star Trek spoilers here.)

My girlfriend and I have been watching Star Trek: The Next Generation in order, from the beginning. This morning we got to “All Good Things”, the final episode of The Next Generation. I’ve spent the rest of the day thinking about why Star Trek used to be so popular as a TV series, and why it feels like it’s lost its way. And then I had a weird thought. Star Trek certainly had its ups and downs after The Next Generation went off the air in 1994, but after the 90s ended, things went really badly. Star Trek: Enterprise, the last TV series, was prematurely canceled. Plans for a fifth Next Generation movie fell through after the fourth, Star Trek: Nemesis, was a commercial failure. And it wasn’t just that people were over Star Trek. Star Trek, with the same writers and the same producers, had seriously declined in quality. So what happened? And then I had a totally crazy thought: could it have been 9/11? I know, crazy. But after 2001, something about Star Trek got weird. Star Trek got unnaturally… dark.

Pushing Updates and User Expectation

In the debate about Apple quality, OS X Snow Leopard is usually cited as the high-water mark of recent OS X quality. Whether or not you believe that Snow Leopard had fewer bugs than any other release (which I tend to), another thing that has changed since Snow Leopard is how Apple distributes Mac OS X.