
Apple admits to using Google Tensor hardware to train Apple Intelligence

New artificial intelligence research published by Apple reveals that the company used Google hardware to build the early foundations of Apple Intelligence.

The research paper, titled “Apple Intelligence Foundation Language Models,” is fairly technical and details the already-known data sources behind the language models at the core of the company’s new technology. However, a detail buried inside the paper indicates that Apple relied on Google hardware during early development.

The paper states that the Apple Foundation Model (AFM) and the server technology that drives it were initially trained on “v4 and v5p Cloud TPU clusters” using Apple software. The research also goes into considerable detail about how that training was done and which data sources were used.

A CNBC report on Monday suggested that Apple rented time on existing Google-hosted clusters, but the research doesn’t directly support that, and it never mentions Google or Nvidia by name. It is more likely that Apple bought the hardware outright from Google and used it within its own data centers.

The model’s initial training being performed on Google-designed hardware ultimately doesn’t mean much in the long run. Apple is reported to have hardware derived from Apple Silicon in its data centers to process Apple Intelligence queries.

Under an initiative reportedly called “Project ACDC,” Apple plans to optimize AI processing within its own data centers.

Apple is significantly increasing its investment in the artificial intelligence sector, planning to allocate over $5 billion to AI server enhancements over the next two years. The company aims to match the technological capabilities of industry leaders like Microsoft and Meta by acquiring tens of thousands of AI server units, likely driven by Project ACDC.

Apple has also acquired firms in Canada and France that both work on compressing data used in AI queries to data centers.
