The world wants to regulate AI, but does not quite know how
The venue will be picturesque: a 19th-century pile north of London that during the second world war was home to Alan Turing, his code-breaking crew and the first programmable digital computer. The attendees will be an elite bunch of 100 world leaders and tech executives. And the question they will strive to answer is epochal: how to ensure that artificial intelligence neither becomes a tool of unchecked malfeasance nor turns against humanity.
The “AI Safety Summit”, which the British government is hosting on November 1st and 2nd at Bletchley Park, appears destined for the history books. And it may indeed one day be seen as the first time global power-brokers sat down to discuss seriously what to do about a technology that may change the world. As Jonathan Black, one of the organisers, observed, in contrast to other big policy debates, such as climate change, “there is a lot of good will” but “we still don’t know what the right answer is.”
Efforts to rein in AI abound. Negotiations in Brussels entered a pivotal stage on October 25th as officials grappled to finalise the European Union’s ambitious AI act by the end of the year. In the days leading up to Britain’s summit or shortly thereafter, the White House is expected to issue an executive order on AI. The G7 club of rich democracies will this autumn start drafting a code of conduct for AI firms. China, for its part, on October 18th unveiled a “Global AI Governance Initiative”.