This week, Apple CEO Tim Cook unveiled a major partnership with OpenAI to bring its powerful artificial intelligence model to Siri, the company's voice assistant.
But a technical document Apple published after the event shows that Alphabet's Google has emerged as a significant contributor to Apple's AI efforts.
To build its foundational AI models, Apple's engineers used the company's own framework software with a range of hardware, notably its on-premises graphics processing units (GPUs) and Google's tensor processing units (TPUs), which are available only through Google's cloud.
Google has been developing TPUs for roughly a decade and has publicly detailed two variants of its fifth-generation chips that can be used for AI training; according to Google, the performance version of the fifth generation is competitive with Nvidia's H100 AI chips.
At its annual developer conference, Google announced that a sixth generation of TPUs will launch later this year.
The processors are purpose-built for running AI applications and training models, and Google has built a cloud-computing hardware and software platform around them.
Apple and Google did not immediately respond to requests for comment.
Apple did not say how heavily it relied on Google's chips and software compared with hardware from Nvidia or other AI vendors.
But using Google's chips typically requires a client to purchase access through its cloud division, much as customers buy computing time from Amazon Web Services (AWS) or Microsoft Azure.
