
Apple’s Secret AI Sauce Apparently Skips NVIDIA GPUs & Uses Google’s Chips For Training 


This is not investment advice. The author has no position in any of the stocks mentioned. Wccftech.com has a disclosure and ethics policy.

In a fresh research paper detailing how it trained the AI models behind the artificial intelligence features announced this year for the iPhone and other products, Cupertino tech giant Apple, it seems, has chosen to rely on Google’s chips instead of market leader NVIDIA’s. NVIDIA’s rise to the top of the market capitalization food chain is built on strong demand for its GPUs, which has pushed revenue and earnings higher by triple-digit percentages.

However, in its paper, Apple shares that its 2.73 billion parameter Apple Foundation Model (AFM) relies on v4 and v5p tensor processing unit (TPU) cloud clusters provided by Alphabet Inc’s Google.

Research Paper Shows Apple’s AI Approach Relies On TPUs Instead Of GPUs

Apple’s research paper, released earlier today, covers the training infrastructure and other details for the AI models that will power the features announced at WWDC earlier this year. Apple announced both on-device and cloud AI processing, and at the heart of these AI features is the Apple Foundation Model, dubbed AFM.

For AFM-server, the model that will power the cloud AI features under Private Cloud Compute, Apple shared that it trains the model “from scratch” on 6.3 trillion tokens using “8192 TPUv4 chips.” Google’s TPUv4 chips are available in pods made up of 4,096 chips each.
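Apple’s paper does not include training code, but Cloud TPUs are typically driven through JAX via the XLA compiler, so a minimal JAX sketch of data-parallel training across a TPU device mesh gives a sense of what pod-scale training involves. Everything below (the toy linear loss_fn, learning rate, and batch shapes) is a hypothetical stand-in, not Apple’s AFM recipe:

```python
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# On a TPU pod slice, jax.device_count() sees every chip across all hosts;
# two TPUv4 pods of 4,096 chips each would expose 8,192 devices here.
devices = mesh_utils.create_device_mesh((jax.device_count(),))
mesh = Mesh(devices, axis_names=("data",))

def loss_fn(params, batch):
    # Toy linear model standing in for a transformer forward pass.
    preds = batch["x"] @ params["w"]
    return jnp.mean((preds - batch["y"]) ** 2)

@jax.jit
def train_step(params, batch):
    loss, grads = jax.value_and_grad(loss_fn)(params, batch)
    # With sharded inputs, jit inserts the cross-chip gradient reduction.
    new_params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
    return new_params, loss

# Split the global batch along the "data" mesh axis; replicate the parameters.
params = jax.device_put({"w": jnp.zeros((128, 1))}, NamedSharding(mesh, P()))
batch = jax.device_put(
    {"x": jnp.ones((1024, 128)), "y": jnp.ones((1024, 1))},
    NamedSharding(mesh, P("data")),
)
params, loss = train_step(params, batch)
```

This shows plain data parallelism from a single-controller view; real frontier-scale runs layer tensor and pipeline sharding on top of it.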

Apple added that the AFM models (both on-device and cloud) are trained on TPUv4 and Cloud TPU v5p clusters. v5p is part of Google’s Cloud AI ‘Hypercomputer,’ and it was announced in December last year.

Each v5p pod is made up of 8,960 chips, and according to Google, it offers twice the floating point operations per second (FLOPS) and three times the memory of TPU v4, allowing models to be trained nearly three times faster.
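For a sense of what these slices look like from software, a JAX program running on a Cloud TPU slice can enumerate the chips it was allocated. The snippet below is a generic sketch, not Apple-specific code, and the counts depend entirely on the slice requested:

```python
import jax

# Global chip count across every host in the slice; a full v5p pod
# tops out at 8,960 chips, while a full v4 pod has 4,096.
print("global devices:", jax.device_count())

# Chips physically attached to this particular host VM.
print("local devices:", jax.local_device_count())

# Inspect the accelerator type JAX detected on each local chip.
for d in jax.local_devices():
    print(d.id, d.platform, d.device_kind)
```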

Apple’s description of its on-device and cloud AI features. Both rely on the AFM or AFM-derived training techniques. Image: Apple Intelligence Foundation Language Models/Apple

For the on-device AI model for features such as writing and image selection, Apple uses a 6.4 billion parameter model that is “trained from scratch using the same recipe as AFM-server.” Apple relied on the older v4 TPU chips for the AFM-server model; as highlighted above, it used 8,192 of them. For the on-device AFM model, however, the firm chose the newer chips: this model, according to Apple, was trained on 2,048 TPU v5p chips.
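The paper doesn’t say how long these runs took, but a back-of-the-envelope estimate illustrates the scale. The sketch below uses the common ≈6 × parameters × tokens approximation for training FLOPs, Google’s published ~459 bf16 TFLOPS peak per v5p chip, and an assumed 40% hardware utilization; all three are assumptions from outside the paper, so treat the result as a rough order of magnitude only:

```python
# Rough training-time estimate for a 6.4B-parameter model on 6.3T tokens
# across 2,048 TPU v5p chips. All constants are assumptions, not figures
# from Apple's paper.
PARAMS = 6.4e9          # model parameters
TOKENS = 6.3e12         # training tokens
CHIPS = 2048            # TPU v5p chips
PEAK_FLOPS = 459e12     # bf16 peak per v5p chip (Google's published spec)
UTILIZATION = 0.40      # assumed model FLOPs utilization (MFU)

total_flops = 6 * PARAMS * TOKENS                      # ~2.4e23 FLOPs
effective = CHIPS * PEAK_FLOPS * UTILIZATION           # sustained FLOP/s
days = total_flops / effective / 86_400
print(f"{total_flops:.2e} FLOPs -> ~{days:.1f} days")  # roughly a week
```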

Other details shared in the paper include evaluations of the models for harmful responses, sensitive topics, factual correctness, math performance, and human satisfaction with model outputs. According to Apple, the AFM-server and on-device models lead their industry counterparts in suppressing harmful outputs.

For instance, AFM-server had a harmful output violation rate of 6.3%, significantly lower than OpenAI GPT-4’s 28.8%, suggests Apple’s data. Similarly, the on-device AFM’s 7.5% violation rate was lower than the 21.8% scored by Llama-3-8B (trained by Facebook parent Meta).

For email, message, and notification summarization, the on-device AFM had satisfaction percentages of 71.3%, 63%, and 74.9%, respectively. The research paper shared that these scores led those of the Llama, Gemma, and Phi-3 models.
