Tim Cook, chief executive officer of Apple Inc., speaks during the Apple Worldwide Developers Conference (WWDC) in San Jose, California, U.S., on Monday, June 5, 2017.
Apple on Monday introduced Core ML, a set of tools that developers can use to incorporate machine learning techniques into their apps. The news was little noticed amid all the new hardware and iOS updates. But it’s important.
The capabilities of Core ML are a “dead giveaway” that Apple is preparing to introduce a new kind of processor, presumably just for iPhones at first, that could make trendy machine learning workloads run more efficiently. That’s according to Reza Zadeh, CEO of image-recognition startup Matroid.
A company would only release something like Core ML “if there’s a really intense piece of hardware that it was going to compile down into,” said Zadeh, who built the machine learning algorithms for Twitter’s Who to Follow feature before starting Matroid last year.
“All those converters and everything, it’s a dead giveaway there’s going to be some intense [processor] available down the line.”
Alphabet’s Google introduced its TensorFlow open-source framework for machine learning in November 2015, and then unveiled a custom tensor processing unit, or TPU, six months later. Zadeh thinks Apple is doing something similar with the launch of Core ML. Meanwhile, startups like Deep Vision and Mythic have been working on processors that could be plunked into smaller devices, as opposed to the data centers where Google’s TPUs are located. Another startup, Movidius, which touts its vision processing unit for drones and other gadgets, was acquired by Intel in 2016.
Bloomberg reported last month on Apple’s plans to build an artificial intelligence (AI) chip. This would build on the inclusion of a field-programmable gate array (FPGA) for certain computations in the iPhone 7 last year.
Core ML supplies ready-made machine learning models that developers can drop right into iOS, macOS and tvOS apps, and it also provides ways for developers to convert and use their own models. Apps drawing on Core ML will be able to recognize objects in images, follow the movement of things in video frames or simply make smart predictions from a body of previous data.
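As a rough illustration, classifying an image with a Core ML model (through Apple’s companion Vision framework) looks something like the following Swift sketch. The `MobileNet` class here is a hypothetical stand-in for whatever model a developer bundles with the app; Xcode generates such a class automatically from a `.mlmodel` file.

```swift
import CoreML
import Vision
import UIKit

// A minimal sketch, assuming the app bundles a Core ML image classifier
// for which Xcode has generated a model class named `MobileNet`.
func classify(image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: MobileNet().model) else { return }

    // Vision wraps the Core ML model and handles image scaling/cropping.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let top = (request.results as? [VNClassificationObservation])?.first
        else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

Whether that `perform` call ultimately runs on the CPU, the GPU or some future dedicated chip is decided by the framework, not the app developer — which is exactly what makes a hardware swap underneath Core ML plausible.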
These processes can run on current iPhones and iPads. But it’s reasonable to think Apple wants to improve performance on future hardware, while also lengthening battery life. It could do that by sending machine learning workloads to a dedicated AI chip, instead of devices’ central processing units (CPUs) and graphics processing units (GPUs), which are not always optimized for those computing duties, Zadeh said.
But that’s not all.
Emerging specialized processors like the rumored Apple AI chip could become the computational workhorses of the next few decades, Zadeh said.