Apple Puts a ‘Neural Engine’ Inside the iPhone
Deep learning is integrated into Apple’s new iPhone X in a big way. In fact, the company has embedded deep learning capabilities right into the hardware with its new A11 Bionic chip, which pairs a CPU and GPU with a dedicated component Apple calls a “neural engine.”
Apple’s new A11 Bionic chip sports a six-core CPU that delivers up to 70% greater performance than the previous A10 chip while also delivering better battery life, along with a new three-core GPU integrated alongside the CPU cores.
Together, the A11’s CPU and GPU cores power a range of compelling new deep learning, augmented reality, and 3D gaming capabilities from a device a little bigger than a deck of cards (one that costs around $1,000, however).
The new chip, along with a new TrueDepth camera that includes a dot projector and an infrared sensor, allows Apple to utilize facial recognition technology in a whole new way. In addition to using facial recognition to unlock the phone, the tech giant is allowing software developers to use it for user authentication and to authorize payments on its Apple Pay network.
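For third-party apps, that authentication runs through Apple’s existing LocalAuthentication framework, which fronts Face ID the same way it fronts Touch ID on older devices. Here’s a minimal Swift sketch of that flow (the prompt string and function name are illustrative):

```swift
import Foundation
import LocalAuthentication

// Gate a sensitive action behind biometric authentication.
// On iPhone X the system uses Face ID; older devices fall back to Touch ID.
func authenticateUser(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // Verify that biometric authentication is available and enrolled.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                    error: &error) else {
        completion(false)
        return
    }

    // The system presents its own UI and performs the actual face scan;
    // the app never sees the underlying facial data.
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Confirm it’s you to continue") { success, _ in
        DispatchQueue.main.async { completion(success) }
    }
}
```

Notably, the app only receives a pass/fail answer; the facial math stays inside the secure enclave, consistent with Apple’s on-device privacy pitch.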
Here’s what Apple says about its new Face ID system:
“Face ID projects more than 30,000 invisible IR dots. The IR image and dot pattern are pushed through neural networks to create a mathematical model of your face and send the data to the secure enclave to confirm a match, while adapting to physical changes in appearance over time.”
All saved facial information is protected, the company says, and all processing is done on-device; no data or processing touches the cloud to ensure user privacy. “Face ID only unlocks iPhone X when customers look at it and is designed to prevent spoofing by photos or masks,” the company says.
The iPhone X’s camera and the A11 Bionic chip are also used for “world tracking and scene recognition,” Apple says. With the capability to analyze more than 50 different facial movements, the camera-and-chip combination also powers a new facial animation feature that lets users project their expressions onto a dozen “Animoji” characters, including a panda, a unicorn, and a robot.
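For developers, those facial movements surface through ARKit’s new face-tracking API, which reports them as named “blend shape” coefficients. A brief Swift sketch, assuming a device with the TrueDepth camera (the class name and the two blend shapes shown are illustrative):

```swift
import ARKit

// Track the user's face and read a couple of expression coefficients.
class FaceTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Face tracking requires the iPhone X's TrueDepth camera.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // Each blend shape is a 0.0-1.0 strength for one facial movement,
            // e.g. how far the jaw is open or how wide the smile is.
            let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
            let smile = faceAnchor.blendShapes[.mouthSmileLeft]?.floatValue ?? 0
            print("jawOpen: \(jawOpen), smile: \(smile)")
        }
    }
}
```

Animoji-style features map coefficients like these onto a rigged character in real time.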
Neural networks are a form of machine learning that has recently become popular at large hyperscale outfits like Facebook, Google, and Microsoft, which have used them primarily to train image detection and speech and text recognition models.
Apple — which acquired machine learning software company Turi last year and has made several other acquisitions in the space — is clearly relying on the A11 Bionic processor and associated software for image recognition, but it’s not clear how else the tech giant will use it. During its launch event Tuesday, the company did indicate that the A11 could power “other features.”
This is not the first time that Apple has offered machine learning capabilities in the iPhone. In June, the company unveiled Core ML, a new framework that enables software developers to incorporate Apple’s machine learning technology into the iOS applications they develop for iPhones and iPads without requiring them to develop their own data science expertise.
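The workflow is straightforward: a developer drops a trained model file into an Xcode project, and Xcode generates a typed Swift wrapper class around it. A minimal sketch, assuming a hypothetical image classifier named FlowerClassifier (the model and its labels are illustrative, not something Apple ships):

```swift
import CoreML
import Vision

// FlowerClassifier is a hypothetical .mlmodel added to the Xcode project;
// Xcode auto-generates this Swift wrapper class from the model file.
func classify(_ image: CGImage) {
    guard let model = try? VNCoreMLModel(for: FlowerClassifier().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Results arrive sorted by confidence, highest first.
        if let top = request.results?.first as? VNClassificationObservation {
            print("\(top.identifier): \(top.confidence)")
        }
    }

    // All inference runs on the device's CPU and GPU; nothing goes to the cloud.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```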
As we reported last month, the deep learning-powered framework makes it easy for developers to detect human faces, landmarks, and text in their iOS apps with just a few lines of code. Like the new A11 neural engine, Core ML utilizes the CPU and GPU capacity of the iOS devices themselves.
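Face detection itself goes through the companion Vision framework and really is just a few lines; a brief sketch:

```swift
import Vision

// Find face bounding boxes in an image using the built-in detector.
func detectFaces(in image: CGImage) {
    let request = VNDetectFaceRectanglesRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        for face in faces {
            // boundingBox is normalized to the image (0.0-1.0 on each axis).
            print("Face at \(face.boundingBox)")
        }
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```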
Related Items:
Is Your Smartphone Spying On You?
AI to Surpass Human Perception in 5 to 10 Years, Zuckerberg Says
How Motorola Uses Big Data Analytics to Improve Its Smartphones