Rapid advances in AI technology are prompting a transatlantic response.
The U.S. and the EU are working on a voluntary code of conduct to which they want AI companies to adhere. With enforceable EU legislation still years away, policymakers hope such a code might forestall the doom scenarios that various experts have warned AI could bring about.
At the EU-U.S. Trade and Technology Council (TTC) in Luleå, Sweden, EU tech chief Margrethe Vestager said on Wednesday, May 31st, that she believed a draft for such a code of conduct could be ready within weeks, allowing industry to commit to a final proposal “very, very soon.”
Content-creating ‘generative AI’, of which ChatGPT is the most popular example, is of particular concern to lawmakers. “Everyone knows this is the next powerful thing,” said Vestager, who called it “a complete game-changer.”
Tech companies have been releasing ever more sophisticated AI platforms, a trend spearheaded by OpenAI’s ChatGPT, into which Microsoft has poured billions. Competition in the field is heating up, however, with Google presenting its own PaLM 2.
Since any legislation would “take effect in two and a half to three years’ time” even in the most optimistic scenario, the Danish commissioner told reporters that this is “obviously way too late,” and that “we [i.e. the EU and the U.S.] need to act now.”
In a later tweet, she mentioned watermarking and external audits as potential ideas to be incorporated into the code. Watermarking would entail embedding digital marks in data sets to protect them from unauthorized use, giving data owners more say over who is allowed to train AI models on their data.
After talks with EU officials, U.S. Secretary of State Antony Blinken said they all felt the “fierce urgency” to act in light of the emerging technology. “There is almost always a gap when new technologies emerge,” Blinken said, between their arrival and “the time it takes for governments and institutions to figure out how to legislate or regulate.”
In the meantime, the voluntary code would hopefully plug that gap. Blinken told reporters it “would be open to all like-minded countries.”
The TTC joint statement said expert groups had been set up to focus on the terminology needed to assess AI risks, on cooperation on AI standards, and on monitoring existing and emerging risks.
Ahead of legislation, the EU is already in talks with specific tech companies to get a grip on their AI ventures.
As reported by Reuters, the EU’s industry chief Thierry Breton is to meet OpenAI CEO Sam Altman in June to discuss how his company will implement the bloc’s rules on AI. Altman had previously threatened to leave the EU if its AI laws became too hard to comply with.
The EU commissioner had already met with Sundar Pichai, CEO of Google and its parent company Alphabet, who proved more receptive to the notion of having guardrails in place. After their meeting, Breton said that he and Pichai agreed that they “cannot afford to wait until AI regulation actually becomes applicable, and to work together with all AI developers to already develop an AI pact on a voluntary basis ahead of the legal deadline.”