Safeguarding AI: Addressing the Risks of Generative Artificial Intelligence
June 2023
A new report from the NYU Stern Center for Business and Human Rights argues that the best way to prepare for potential existential risks in the future is to begin now to regulate the AI harms right in front of us.
Some of the largest tech companies, including Microsoft, Google, and Meta, along with start-ups such as OpenAI, Anthropic, and Stability AI, are moving quickly to introduce generative AI products in what is widely referred to as an AI “arms race.” But the technology creates risks of more convincing disinformation campaigns, easier-to-launch cyberattacks, individualized digital fraud, privacy violations, amplified bias and hate speech, rampant falsehoods known as “hallucinations,” and further deterioration of the struggling news business.
In this report, we explore these issues and offer recommendations for both industry and government.