Why Smaller AI Language Models Could Be Better

Large language models (LLMs) often appear to be in a fight to claim the title of largest and most powerful, but many organizations eyeing their use are beginning to realize that big isn't always better.

The adoption of generative artificial intelligence (genAI) tools is on a steep incline. Organizations plan to invest 10% to 15% more on AI initiatives over the next year and a half compared to calendar year 2022, according to an IDC survey of more than 2,000 IT and line-of-business decision makers.

And genAI is already having a significant impact on businesses and organizations across industries. Early adopters claim a 35% increase in innovation and a 33% rise in sustainability because of AI investments over the past three years, IDC found.

Customer and employee retention has also improved by 32%. "AI will be just as crucial as the cloud in providing customers with a genuine competitive advantage over the next five to 10 years," said Ritu Jyoti, a group vice president for AI & Automation Research at IDC. "Organizations that can be visionary will have a huge competitive edge."

While general-purpose LLMs with hundreds of billions or even a trillion parameters might sound powerful, they're also devouring compute cycles faster than the chips they require can be manufactured or scaled up; that can strain server capacity and lead to unrealistically long training times for a particular business use.

"Sooner or later, scaling of GPU chips will fail to keep up with increases in model size," said Avivah Litan, a vice president distinguished analyst with Gartner Research. "So, continuing to make models bigger and bigger is not a viable option."

Dan Diasio, Ernst & Young's Global Artificial Intelligence Consulting Leader, agreed, adding that there's currently a backlog of GPU orders. A chip shortage creates problems not only for tech firms making LLMs, but also for user companies seeking to tweak models or build their own proprietary LLMs.

"As a result, the costs of fine-tuning and building a specialized corporate LLM are quite high, thus driving the trend towards knowledge enhancement packs and building libraries of prompts that contain specialized knowledge," Diasio said.

Additionally, smaller domain-specific models trained on more data will eventually challenge the dominance of today's leading LLMs, such as OpenAI's GPT-4, Meta AI's LLaMA 2, or Google's PaLM 2.

Smaller models would also be easier to train for specific use cases.

LLMs of all sizes can be steered through a process known as prompt engineering — feeding queries and examples of correct responses into the models so the algorithm can respond more accurately. Today, there are even marketplaces for lists of prompts, such as the 100 best prompts for ChatGPT.
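The query-plus-correct-response pattern described above is often packaged as a "few-shot" prompt: worked examples are placed ahead of the new question so the model can infer the expected format. A minimal sketch of how such a prompt might be assembled (the example pairs and template here are illustrative, not drawn from any marketplace list):

```python
# Build a few-shot prompt: each example pairs a query with its correct
# response, so the model sees the expected pattern before the new query.
EXAMPLES = [
    ("Translate to French: cat", "chat"),
    ("Translate to French: dog", "chien"),
]

def build_prompt(examples, new_query):
    parts = []
    for query, correct_response in examples:
        parts.append(f"Q: {query}\nA: {correct_response}")
    # The final query is left unanswered for the model to complete.
    parts.append(f"Q: {new_query}\nA:")
    return "\n\n".join(parts)

prompt = build_prompt(EXAMPLES, "Translate to French: bird")
print(prompt)
```

The resulting string would then be sent to whichever model API the organization uses; the value of a curated prompt library lies in refining these example pairs for each business task.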

But the more data ingested into LLMs, the greater the possibility of bad and inaccurate outputs. GenAI tools are basically next-word predictors, meaning flawed information fed…
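The "next-word predictor" point can be made concrete with a toy model: count which word tends to follow each word in the training text, then always predict the most frequent follower. The code below is a deliberately simplified sketch (real LLMs use neural networks over vast corpora, not bigram counts), but it shows the mechanism by which flawed training data yields flawed predictions:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: tally which word follows each word in the
# training text, then predict the most frequent follower. Errors in the
# training data are reproduced faithfully in the predictions.
def train_bigrams(text):
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word][next_word] += 1
    return followers

def predict_next(followers, word):
    candidates = followers.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

model = train_bigrams("the sky is blue and the sky is blue")
print(predict_next(model, "is"))  # -> blue
```

If the training sentence had instead claimed "the sky is green", the model would predict "green" just as confidently — it has no notion of truth, only of frequency.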

2023-09-16 09:00:03
Source: www.computerworld.com