Is Apple Heading Towards Siri 2.0 at WWDC 2024?

With Apple spending a lot of money on generative AI and machine learning models, is it time for us to start prepping for Siri 2.0?

The Information says Apple has “significantly” increased spending on AI development focused on generative AI capabilities within Siri. The report suggests Apple’s internal AI research is pushing in three key directions:

Building “Ajax,” its own proprietary Large Language Model (LLM). Ajax has reportedly been trained with more than 200 billion parameters, which could make it more powerful than the GPT-3.5 model behind ChatGPT when that chatbot appeared. Apple is reportedly spending millions of dollars a day on Ajax.
Apple continues to develop machine image intelligence, which extends to image and video generation and the creation of 3D scenes.
A third group works on multimodal AI, handling text, images, and video. I expect this includes features such as text scanning in images and door detection.
On the road to Siri 2.0

It feels as though Apple may be slightly stung by criticism of its AI achievements so far. With this in mind, it wants to:

Improve Siri’s conversational abilities.
Develop useful assistant features that rely on AI.
Introduce support for complex tasks within Siri, such as image or text recognition, scene generation, and so forth.
And potentially make it possible to use voice to create Shortcuts functions (see the sketch after this list).
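Apple already ships plumbing that a smarter Siri could drive toward that last goal: the App Intents framework lets developers expose app actions to Siri and Shortcuts. Here is a minimal sketch of what such an action looks like today; the intent, its parameter, and the dialog text are hypothetical examples, not anything described in the report.

```swift
import AppIntents

// Hypothetical example: exposes an "add grocery item" action to Siri and Shortcuts.
struct AddGroceryItemIntent: AppIntent {
    // The phrase shown in the Shortcuts app and usable in a Siri request.
    static var title: LocalizedStringResource = "Add Grocery Item"

    // A parameter Siri can ask for or a Shortcut can supply.
    @Parameter(title: "Item")
    var item: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would persist the item to its data store here.
        return .result(dialog: "Added \(item) to your grocery list.")
    }
}
```

A generative Siri that could chain, or even author, intents like this from a single spoken request is one plausible reading of the voice-created Shortcuts ambition.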

While it’s true that ChatGPT caught almost everyone by surprise, Apple seemed the most left behind once that chatbot appeared. The iPhone maker appears to have put these developments on the fast track and may even have these features ready to roll within iOS 18, the report claimed. Work is being led by a new 16-member team of engineers building the “Foundational Model” LLMs the company will use to build its AI features, at a cost of millions of dollars each day.

Good foundations

Building powerful LLM-based models into Siri may be complicated a little by the company’s dedication to customer privacy. That implies that whatever models it does deploy will primarily rely on data and features that already exist on its devices. That’s where better integration with Shortcuts makes sense, though the company may not be completely reliant on that. Why? Because every Apple chip also carries a Neural Engine, a dedicated area of the chip for handling machine intelligence tasks.
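To illustrate the on-device point, here is a minimal Core ML sketch of the kind of workload the Neural Engine can accelerate today. The model name ("TextClassifier") and its "text"/"label" feature names are hypothetical stand-ins for whatever compiled model an app actually bundles.

```swift
import CoreML

// Hypothetical on-device classification using a bundled Core ML model.
// "TextClassifier" and the "text"/"label" feature names are illustrative only.
func classifyOnDevice(_ text: String) {
    let config = MLModelConfiguration()
    // .all lets Core ML schedule work on the CPU, GPU, or Neural Engine as it sees fit.
    config.computeUnits = .all

    guard let modelURL = Bundle.main.url(forResource: "TextClassifier", withExtension: "mlmodelc") else {
        print("Model not found in app bundle")
        return
    }

    do {
        let model = try MLModel(contentsOf: modelURL, configuration: config)
        // The prediction runs entirely on the device; no user data is sent to a server.
        let input = try MLDictionaryFeatureProvider(dictionary: ["text": text])
        let output = try model.prediction(from: input)
        print(output.featureValue(for: "label") ?? "no label")
    } catch {
        print("On-device inference failed: \(error)")
    }
}
```

Allowing all compute units is how apps keep this kind of machine learning work on the handset rather than in the cloud, which is the trade-off Apple's privacy stance pushes it toward.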

The problem is that Apple’s existing LLMs are quite large, which means they would be difficult to store and run on the device. That limitation suggests the company might develop highly focused models that work well on-device in certain domains and use them in conjunction with cloud-based systems for more complex tasks. That approach might undermine Apple’s environmental work, given the energy and water such data centers devour.

Making things that are actually useful

Will Apple’s teams figure out how to intelligently harness the user behavior data each device holds while keeping it private? Is it even possible to build AI models that can…

2023-09-12 08:48:03
Source: www.computerworld.com
