With almost 1 billion generative AI (genAI)-equipped smartphones set to ship between now and 2027, according to Counterpoint, it’s increasingly likely that Apple will be in the mix with an edge-based Apple GPT inside its phones.
The company has been criticized for appearing to be a latecomer to the genAI party. Arguably, that’s true: even Microsoft Copilot (with ChatGPT built in) is now available as an iPhone app.
Deliberate, intentional … and a bit slow
Apple has commented on the technology, pointing out that it already incorporates a lot of machine intelligence within its devices and explaining plans to expand the AI within its products on a “deliberate” basis. The implication is that any large-scale deployment of such profound technology should be purpose-driven to avoid unexpected consequences.
With those statements designed to buy it some time, the company is quietly investing billions in R&D around the technology — including AI deals with news publishers.
It has held an internal AI summit and is alleged to be aiming to deliver a much smarter, much more AI-driven Siri along with the strategic inclusion of genAI properties across its apps, all within an internal project dubbed “Ajax.”
R&D on the fast track
The company seems to be making progress. According to analyst Jeff Pu, Apple aims to bring this smarter Siri to market toward the end of the year — just in time to take a slice of the market growth Counterpoint envisions. (It now predicts about 100 million smartphones with on-device genAI will ship this year.)
The problem with genAI is that it typically runs on servers, because the models demand significant memory and compute. Think of it this way: Today, if you use Microsoft Copilot on your iPhone to run a genAI request, the task is offloaded to a server for the actual work, and the response is returned to the device (see the sketch after the list below).
That’s not ideal for three key reasons:
Privacy, security, and data protection.
The need to be online throughout the process.
The excessive energy and water consumption at the server level.
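To make that round trip concrete, here is a minimal sketch in Python of how such offloading generally works; the endpoint URL, payload shape, and helper name are hypothetical illustrations, not Copilot’s actual API:

```python
# Hypothetical sketch of server-side genAI offloading: the prompt leaves the
# device, the model runs in the cloud, and only the text response comes back.
import requests  # assumes the third-party 'requests' package is installed


def ask_cloud_model(prompt: str) -> str:
    # Placeholder URL and response schema, for illustration only.
    resp = requests.post(
        "https://example.com/v1/generate",
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]


# The device must stay online, and the (possibly sensitive) prompt is
# processed off-device -- the drawbacks listed above.
print(ask_cloud_model("Summarize my last three emails."))
```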
Apple’s focus on privacy, security, and the environment means the company surely wants to be able to run requests natively on the edge device, no server required.
What Apple has done
Apple’s R&D teams have taken a significant step toward that, announcing a major breakthrough that promises to let iPhones and other Apple devices successfully run computationally and memory-intensive large language models (LLMs) on the device itself.
“Our work not only provides a solution to a current computational bottleneck, but also sets a precedent for future research,” the researchers said. “We believe as LLMs continue to grow in size and complexity, approaches like this work will be essential for harnessing their full potential in a wide range of devices and applications.”
It feels like internal development is accelerating.
Apple’s machine learning (ML) teams also recently released a new ML framework for Apple Silicon: MLX,…
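MLX is Apple’s open-source array framework for Apple Silicon, with a NumPy-like Python API, lazy evaluation, and unified memory shared between the CPU and GPU. As a rough illustration only — a minimal sketch assuming MLX has been installed (for example via `pip install mlx`) — the snippet below multiplies two matrices on-device:

```python
# Minimal MLX sketch: arrays live in unified memory, so CPU and GPU can
# operate on them without copies.
import mlx.core as mx

a = mx.random.normal((1024, 1024))
b = mx.random.normal((1024, 1024))

# Operations are lazy; the matmul is only computed when the result is needed.
c = a @ b

# Force evaluation explicitly (useful before timing or printing results).
mx.eval(c)
print(c.shape)
```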