When observing all the mega-hits that Apple has brought to market over the past 40 years, one consistent theme emerges: Apple attempts the things that are considered hard, or even impossible, at the time.
With the original Mac, they created a GUI-only computer with a mere 128K bytes of memory. With the iPod, they synced 1,000 songs (5 GB worth) from your PC in an age when the predominant I/O standard (USB 1) was woefully inadequate (and tiny hard drives had only just become available). With the iPhone, they shrank a full-blown PC to the size of a chocolate bar. With Mac OS X, they implemented a radically new graphical rendering system (the Quartz Compositor) that taxed memory and CPU power and was unbearably slow on the hardware of the time; it only became usable years later, with powerful new GPUs (Mac OS X 10.2).
In all these cases, Apple was not shy about attempting something that most people at the time considered very difficult, if not impossible. Sometimes even Apple failed to do it well enough, and suffered the consequences of an inadequate product (low early Mac sales, the painfully slow Mac OS X 10.0 and 10.1). But in the end, that is why they managed to differentiate: others had not even started.
Apple’s approach to privacy can be seen in the same way. Whereas the common narrative was that you needed huge servers and massive data sets for good photo recognition, Apple has implemented machine learning on a smartphone that fits in your pocket. Of course they may be taking shortcuts, but so did the Mac 128K. What is important is that they took on the challenge while everybody else was doing machine learning the old way (on powerful servers, with less regard for privacy). Similarly, Apple has implemented a differential privacy approach that still has no guarantee of success. Even experts in the field are split, and some say that the trade-off between privacy and machine learning effectiveness might result in a product that simply won’t work. Apple made the bet nonetheless. Apple chose to take the hard, possibly impossible way, hobbling itself with the self-imposed shackle of a privacy focus. They have thought different.
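For readers unfamiliar with the technique, here is a minimal sketch of the core idea behind differential privacy: before releasing a statistic, you add carefully calibrated random noise so that no individual's presence in the data can be confidently inferred. This is the textbook Laplace mechanism, not Apple's actual deployed system (which reportedly uses more elaborate on-device variants); the emoji-count scenario is purely illustrative:

```python
import numpy as np

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Laplace mechanism: add noise with scale sensitivity/epsilon.

    A smaller epsilon means stronger privacy, and therefore a noisier answer.
    Sensitivity is how much one individual can change the count (here, 1).
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Illustrative example: report how many users typed a given emoji today.
true_count = 1_234
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps:>4}: reported count = {private_count(true_count, eps):.1f}")
```

The tension the experts are split over is visible right in the `epsilon` parameter: dial it down for privacy and the answers get noisy enough to threaten the product; dial it up for accuracy and the privacy guarantee weakens.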
The simple reason why Apple’s approach has worked even once is Moore’s law. Moore’s law is the central source of rapid technical progress and disruption, and it turns what is impossible today into something easy to achieve tomorrow.
No one who has seen the progress of silicon would doubt that Moore’s law will eventually make the processing tasks done exclusively on high-powered servers today possible on the smartphones of tomorrow. We should also consider that the amount of data collected from smart devices must be growing even faster than Moore’s law (thanks to the shrinking size and ubiquity that Moore’s law made possible in the first place). Tomorrow, we will have many times more data than we collect today, and it is entirely possible that the sheer vastness of that data will make it possible to draw meaningful conclusions from differentially private data, even when anonymised under very stringent noise levels.
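A toy simulation illustrates why more data helps. Randomized response, one of the oldest privacy mechanisms, has each user lie with some probability before answering; any single answer is deniable, yet the population rate is recoverable by inverting the known noise process, and the estimation error shrinks roughly as 1/√n. The mechanism and numbers below are illustrative assumptions, not Apple's:

```python
import random

P_HONEST = 0.75  # each user answers honestly with this probability

def randomized_response(truth: bool) -> bool:
    """With probability P_HONEST answer truthfully; otherwise answer at random."""
    if random.random() < P_HONEST:
        return truth
    return random.random() < 0.5

def estimate_true_rate(responses: list) -> float:
    """Invert the noise: observed = P_HONEST * true + (1 - P_HONEST) * 0.5."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - P_HONEST) * 0.5) / P_HONEST

true_rate = 0.30  # fraction of users with some sensitive attribute
for n in (1_000, 100_000, 10_000_000):
    responses = [randomized_response(random.random() < true_rate) for _ in range(n)]
    print(f"n={n:>10,}  estimate={estimate_true_rate(responses):.4f}")
```

Run it and the estimate wanders visibly at a thousand users but pins down the true 30% to a few decimal places at ten million, at the exact same noise level per user. That is the bet in miniature: hold the privacy guarantee fixed, and let the flood of data buy back the accuracy.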
Therefore, I predict that even though Apple’s approach to privacy may lead to a worse experience for the next couple of years, as Moore’s law kicks in, the difference will end up being negligible. By the time the general public becomes acutely aware of the need for privacy, Apple will have a powerful solution that, in terms of user experience, is just as good as Google’s.
The boldness to go all-in on a technology that just barely works, based on the hope that Moore’s law will save them in the next couple of years, is a defining feature of Apple’s hugely successful innovations. This is a formula that has worked for them time and time again.
This is what I see in Apple’s current privacy approach, and this is why I find it so typically, so endearingly Apple.