UK regulator highlights principles for AI foundation models, cautions against potential harm

The UK’s Competition and Markets Authority (CMA) has warned about the potential risks of artificial intelligence in its newly published review into AI foundation models.

Foundation models are AI systems that have been trained on massive, unlabeled data sets. They underpin large language models, such as OpenAI’s GPT-4 and Google’s PaLM, for generative AI applications like ChatGPT, and can be used for a wide range of tasks, such as translating text and analyzing medical images.
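As a concrete illustration of the kind of application the report covers, here is a minimal sketch of calling a hosted foundation model for one of those tasks, text translation. It assumes the official openai Python SDK (version 1 or later) and an OPENAI_API_KEY set in the environment; the model name and prompt are illustrative only and are not drawn from the CMA report.

```python
# Minimal sketch: using a foundation model (here, OpenAI's GPT-4)
# for text translation. Assumes the `openai` Python SDK (v1+) is
# installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a translation assistant."},
        {"role": "user", "content": "Translate into French: The report proposes seven principles."},
    ],
)

print(response.choices[0].message.content)
```

The same prompt-and-response pattern applies whether the task is translation, summarization, or image analysis, which is why the report treats foundation models as general-purpose inputs for many downstream products.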

The new report proposes a number of principles to guide the ongoing development and use of foundation models, drawing on input from 70 stakeholders, including developers, businesses, consumer and industry organizations, and academics, as well as publicly available information.

The proposed principles are:

Accountability: AI foundation model developers and deployers are accountable for outputs provided to consumers.
Access: Ongoing ready access to key inputs, without unnecessary restrictions.
Diversity: Sustained diversity of business models, including both open and closed.
Choice: Sufficient choice for businesses so they can decide how to use foundation models.
Flexibility: Having the flexibility to switch and/or use multiple foundation models according to need.
Fair dealing: No anticompetitive conduct, including self-preferencing, tying, or bundling.
Transparency: Consumers and businesses are given information about the risks and limitations of foundation model-generated content so they can make informed choices.

Poorly developed AI models could lead to societal harm

While the CMA report highlights how people and businesses stand to benefit from correctly implemented and well-developed foundation models, it cautioned that if competition is weak or AI developers fail to comply with consumer protection law, the technology could lead to societal harm. Examples given include citizens being exposed to “significant levels” of false and misleading information and AI-enabled fraud.

The CMA also warned that, in the longer term, market dominance by a small number of firms could raise competition concerns, with established players using foundation models to entrench their position and deliver overpriced or poor-quality products and services.

“The speed at which AI is becoming part of everyday life for people and businesses is dramatic. There is real potential for this technology to turbocharge productivity and make millions of everyday tasks easier – but we can’t take a positive future for granted,” said Sarah Cardell, CEO of the CMA, in comments posted alongside the report.

“There remains a real risk that the use of AI develops in a way that undermines consumer trust or is dominated by a few players who exert market power that prevents the full benefits being felt across the economy.”

The CMA said that as part of its program of engagement, it would continue to speak to a wide range of interested parties, including consumer…

2023-09-18 15:24:03
Original from www.computerworld.com
