Safeguarding AI: Addressing the Risks of Generative Artificial Intelligence

June 2023

A new report from the NYU Stern Center for Business and Human Rights argues that the best way to prepare for potential existential risks in the future is to begin now to regulate the AI harms right in front of us.

Some of the largest tech companies, including Microsoft, Google, and Meta, along with start-ups such as OpenAI, Anthropic, and Stability AI, are moving quickly to introduce generative AI products in what is widely referred to as an AI “arms race.” But the technology creates risks: more convincing disinformation campaigns, easier-to-launch cyberattacks, individualized digital fraud, privacy violations, amplified bias and hate speech, rampant falsehoods known as “hallucinations,” and further deterioration of the struggling news business.

In this report, we explore these issues and offer recommendations for both industry and government.

Download the Generative AI Report
