Seven AI companies agree to safeguards in the US

The White House has announced that seven leading companies in artificial intelligence have vowed to manage the risks associated with the technology.

As part of the agreement, AI systems will be tested for security, and the results of those tests will be made available.

The announcement was made by representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

A number of warnings have been issued about the technology’s capabilities.

With the 2024 US presidential election approaching, the rapid pace at which companies are developing these tools has raised concerns about their potential to spread disinformation.

On Friday, President Biden said: "We must be clear-eyed and vigilant about the threats emerging technologies can pose to our democracy and our values."

Facebook’s parent company, Meta, announced on Wednesday its own AI tool, dubbed Llama 2.

Meta’s president of global affairs, Sir Nick Clegg, told the BBC that the “hype has somewhat outpaced the technology”.

Under the agreement signed on Friday, the companies committed to:

  • Having their AI systems tested for security by internal and external experts before release.

  • Implementing watermarks so that people can identify AI-generated content.

  • Reporting regularly on the capabilities and limitations of their AI systems.

  • Identifying risks such as bias, discrimination, and invasion of privacy.

It should be easy for people to tell when online content is created by artificial intelligence, according to the White House.

Mr Biden said: "This is a serious responsibility, we have to do it right." He added: "And there is enormous potential upside."

On a visit to San Francisco in June, EU commissioner Thierry Breton discussed watermarks for AI-generated content with OpenAI chief executive Sam Altman.

“Looking forward to continuing our discussions – particularly on watermarking,” Breton wrote in a tweet that included a video clip.

The voluntary safeguards signed on Friday are seen as a step toward more robust regulation of AI in the US.

In a statement, the administration said it is also working on an executive order.

In addition, the White House said it would work with allies to establish an international framework to govern the development and use of artificial intelligence.

Warnings about the technology have ranged from its potential to generate misinformation and destabilise society to claims that it poses an existential threat to humanity, although some leading computer scientists have questioned the more apocalyptic scenarios.