Safeguarding AI: Addressing the Risks of Generative Artificial Intelligence

June 2023

A new report from the NYU Stern Center for Business and Human Rights argues that the best way to prepare for potential existential risks in the future is to begin now to regulate the AI harms right in front of us.

Some of the largest tech companies, including Microsoft, Google, and Meta, as well as start-ups such as OpenAI, Anthropic, and Stability AI, are moving quickly to introduce generative AI products in what is widely referred to as an AI “arms race.” But the technology creates risks: more convincing disinformation campaigns, easier-to-launch cyberattacks, individualized digital fraud, privacy violations, amplified bias and hate speech, rampant falsehoods known as “hallucinations,” and further deterioration of the struggling news business.

In this report, we examine these risks and offer recommendations for both industry and government.
