Turmoil at OpenAI Points to Dangers of Testing Generative Artificial Intelligence ‘In the Wild’

June 5, 2024

In a recent report published by our Center, I surveyed the risks posed by generative artificial intelligence. To mitigate these risks, including the accelerated spread of biased content and political misinformation, I recommended that Microsoft and its development partner OpenAI proceed more cautiously and test their new products in the lab before releasing them to the public:

In an ideal world, Microsoft and its rivals would slow down and rethink the stuff being ground into the AI sausage. Products whose risks cannot be thoroughly mitigated would be pulled from the market. Planned future offerings would remain in the lab until they are fully tested and made safe.

Instead of this cautious approach, however, some of the companies leading the charge are insisting that testing should take place “in the wild,” Silicon Valley-speak that refers to evaluating product performance as consumers use it. “You can’t build the perfect product in a lab,” Yusuf Mehdi, a Microsoft vice president who heads consumer marketing, told Axios in February 2023. “You have to get it out and test it with people.”

Now, current and former safety employees at OpenAI, the market leader in generative AI, have issued a public warning that the company is fostering a corporate culture that is the opposite of what is needed. The New York Times reports:

A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.

The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.

I highly recommend reading the full Times dispatch by the paper’s able tech columnist Kevin Roose. If it unnerves you, as it should, proceed directly to our report for the full story. Then press your member of Congress to push for more systematic regulation of digital industries, including generative AI.

We will continue to follow this and related developments in the technology industry as they affect democracy and society generally.
