Safeguarding AI: Addressing the Risks of Generative Artificial Intelligence
June 2023
A new report from the NYU Stern Center for Business and Human Rights argues that the best way to prepare for potential existential risks in the future is to begin now to regulate the AI harms right in front of us.
Some of the largest tech companies, including Microsoft, Google, and Meta, as well as start-ups such as OpenAI, Anthropic, and Stability AI, are moving quickly to introduce generative AI products in what is widely referred to as an AI “arms race.” But the technology creates risks of more convincing disinformation campaigns, easier-to-launch cyberattacks, individualized digital fraud, privacy violations, amplified bias and hate speech, rampant falsehoods known as “hallucinations,” and further deterioration of the struggling news business.
In this report, we explore these issues and include various recommendations for both industry and government.
Related
Conscience Incorporated
In his new book Conscience Incorporated, Michael Posner, director of the Center for Business and Human Rights, offers practical strategies and bold reforms to help businesses align profitability with ethical responsibility.
Setting Higher Standards: How Governments Can Regulate Corporate Human Rights Performance
Our report, released three months after the landmark EU Corporate Sustainability Due Diligence Directive (CSDDD) entered into force, provides a roadmap for regulators and companies navigating a new era of corporate human rights responsibility.
Covert Campaigns: Safeguarding Encrypted Messaging Platforms from Voter Manipulation
Our new report on encrypted messaging platforms reveals how political propagandists are exploiting these tools to manipulate voters globally, and it offers recommendations for platforms, policymakers, and researchers to mitigate these threats without undermining end-to-end encryption.