(Bloomberg) — Seven leading artificial intelligence firms will debut new voluntary safeguards designed to minimize abuse of and bias within the emerging technology at an event Friday at the White House.
President Joe Biden will be joined by executives from Amazon.com Inc., Alphabet Inc., Meta Platforms Inc., Microsoft Corp., and OpenAI, who are among the firms committing to a transparency and security pledge.
Under the agreement, companies will put new artificial intelligence systems through internal and external testing before their release and ask outside teams to probe their systems for security flaws, discriminatory tendencies or risks to Americans’ rights, health information or safety.
The firms, including Anthropic and Inflection AI, are also making new commitments to share information to improve risk mitigation with governments, civil society, and academics – and report vulnerabilities as they emerge. And leading AI companies will incorporate virtual watermarks into the material they generate, offering a way to help distinguish real images and video from those created by computers.
The package formalizes and expands some of the steps already underway at major AI firms, which have seen immense public interest in their emerging technology – matched only by concern over the corresponding societal risks.
Nick Clegg, the president of global affairs at Meta, said the voluntary commitments were an “important first step in ensuring responsible guardrails are established for AI and they create a model for other governments to follow.”
“AI should benefit the whole of society. For that to happen, these powerful new technologies need to be built and deployed responsibly,” he said in a statement released early Friday.
White House aides say the pledge helps balance the promise of artificial intelligence against the risks, and is the result of months of intensive behind-the-scenes lobbying. Many of the executives expected at the White House on Friday attended a meeting with Biden and Vice President Kamala Harris in May, where the administration warned the industry it was responsible for ensuring the safety of its technology.
“We’ve got to make sure that the companies are pressure testing their products as they develop them and certainly before they release them, to make sure that they don’t have unintended consequences, like being vulnerable to cyberattacks or being used to discriminate against certain people,” White House Chief of Staff Jeff Zients said in an interview. “And the important thing — and you’ll see this throughout all the work — is they can’t grade their own homework here.”
Voluntary Safeguards
Still, the fact the commitments are voluntary illustrates the limits of what Biden’s administration can do to steer the most advanced AI models away from potential misuse.
The guidelines don’t require approval from specific outside experts before a technology is released, and companies are only required to report – rather than eliminate – risks like possible inappropriate use or bias. The watermarking system still needs to be developed, and it may prove difficult to stamp content in a way that couldn’t be easily removed by malicious actors seeking to sow disinformation on the internet.
And there are few mechanisms beyond public opinion to compel commitments to use the technologies for societal priorities like medicine and climate change.
“It’s a moving target,” Zients said. “So we not only have to execute and implement on these commitments, but we’ve got to figure out the next round of commitments as the technologies change.”
Zients and other administration officials also say it will be difficult to keep pace with emerging technologies without congressional legislation that both helps the government impose stricter rules and dedicates funding to hire experts and regulators.
Aides describe concern over artificial intelligence as a top priority of the president in recent months. Biden frequently brings the topic up in meetings with economic, national security, and health advisers, and has had conversations with Cabinet secretaries telling them to prioritize examining how the technology might intersect with their agencies.
In conversations with outside experts, Biden was warned that algorithmic social media – like Meta’s Facebook and Instagram and ByteDance Ltd.’s TikTok – has already illustrated some of the risks that artificial intelligence could pose. One outside adviser suggested the president should consider the issue akin to cloning in the 1990s, needing clear principles and guardrails.
The White House said it consulted with the governments of 20 countries before Friday’s announcement.
“I think all sides were willing or eager to move as quickly as possible on this because that’s how AI works — you can’t sleep on this technology,” said Deputy Chief of Staff Bruce Reed.
All of these efforts, however, lag behind the pace of AI developments spurred by intense competition among corporate rivals and by the fear that Chinese innovation could overtake Western advances.
Even in Europe, where the EU’s AI Act is far ahead of anything passed by the US Congress, leaders have recognized the need for voluntary commitments from companies before binding law is in place. One White House official estimated it could be at least two years before European regulations began impacting AI firms.
That’s left officials there also asking companies to police themselves. In meetings with tech executives over the past three months, Thierry Breton, the European Union’s internal market commissioner, has called on AI developers to agree to an “AI Pact” to set some non-binding guardrails.
–With assistance from Jennifer Jacobs.
©2023 Bloomberg L.P.