From tomorrow, the UK government is hosting the first global AI Safety Summit, bringing together about 100 people from industry and government to develop a shared understanding of the emerging risks of leading-edge AI while unlocking its benefits.
The event will be held at Bletchley Park, a site in Milton Keynes that became the home of Britain's code breakers during World War II and saw the development of Colossus, the world's first programmable digital electronic computer, which was used to decrypt the Lorenz cipher of the German High Command. The code-breaking effort at Bletchley is credited with shortening the war by at least two years.
“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” said UK Prime Minister Rishi Sunak in a speech last week, adding that one of the aims of the summit will be an attempted agreement on the first-ever international statement about the nature of the risks posed by AI.
Is the summit’s agenda sufficient?
In September, the UK government released an agenda ahead of the summit, which included the development of a shared understanding of the risks posed by frontier AI, alongside calls for a process of international collaboration on AI safety, including how best to support national and international frameworks.
These talking points were reinforced by a discussion paper published by the government last week, which is due to be distributed to summit attendees with the aim of informing discussions.
“The UK wants to be seen as an innovation hub and [AI technologies are] clearly going to be a massive area of growth and development, both for the economy and the workforce,” said Philip Blows, CEO of StreaksAI, a UK-based developer of AI technology.
However, while the general consensus seems to favor an event where the risks of the technology are discussed, the format of the AI Safety Summit has faced some criticism. Some high-profile attendees have been announced, such as US Vice President Kamala Harris, but confirmation of the full guest list has not yet been made public.
“Who gets to sit at the table and make decisions about the most important safety issues and potential harms is really critical,” said Michael Bak, executive director of the Forum on Information and Democracy.
“If that’s a close-knit group of people, dominated by the private sector… that would concern me,” Bak said. “My desire would be that there would be recognition of the value that civil society brings to the table, in addition to the benefit of technologists who are developing these products for private interests.”
Hosting an AI Safety Summit is a “positive first step” as it means governments are “acknowledging that there are risks attached to this technology,” said Shweta Singh, an assistant professor at the University of Warwick whose research includes ethical and responsible AI.
There’s a concern, however, that…