Safeguarding AI: Addressing the Risks of Generative Artificial Intelligence
June 2023
A new report from the NYU Stern Center for Business and Human Rights argues that the best way to prepare for potential future existential risks is to begin now to regulate the AI harms right in front of us.
Some of the largest tech companies, including Microsoft, Google, and Meta, as well as start-ups such as OpenAI, Anthropic, and Stability AI, are moving quickly to introduce generative AI products in what is widely referred to as an AI "arms race." But the technology creates serious risks: more convincing disinformation campaigns, easier-to-launch cyberattacks, individualized digital fraud, privacy violations, amplified bias and hate speech, rampant falsehoods known as "hallucinations," and further deterioration of the struggling news business.
In this report, we explore these issues and offer recommendations for both industry and government.
Related
‘We Want You To Be A Proud Boy’: How Social Media Facilitates Political Intimidation and Violence
As a volatile election nears, our new report reveals that social media is consistently exploited to facilitate political intimidation and violence, and recommends crucial changes that social media companies and governments can implement to reduce these harms.
Digital Risks to the 2024 Elections: Safeguarding Democracy in the Era of Disinformation
A new report by Paul M. Barrett, Justin Hendrix and Cecely Richard-Carvajal highlights that this year's primary tech-related threat to elections isn't AI-generated content, but the spread of false, hateful, and violent content on social media platforms.
NetChoice Amicus Brief
In this brief, the Center urged the Supreme Court not to grant the social media industry full immunity from regulation, while also arguing that content moderation laws in Florida and Texas violate the First Amendment.