Safeguarding AI: Addressing the Risks of Generative Artificial Intelligence
June 2023
A new report from the NYU Stern Center for Business and Human Rights argues that the best way to prepare for potential existential risks in the future is to begin now to regulate the AI harms right in front of us.
Some of the largest tech companies, including Microsoft, Google, and Meta, along with start-ups such as OpenAI, Anthropic, and Stability AI, are moving quickly to introduce generative AI products in what is widely referred to as an AI “arms race.” But the technology also creates risks: more convincing disinformation campaigns, easier-to-launch cyberattacks, individualized digital fraud, privacy violations, amplified bias and hate speech, rampant falsehoods known as “hallucinations,” and further deterioration of the struggling news business.
In this report, we explore these issues and offer recommendations for both industry and government.
Related
Digital Risks to the 2024 Elections: Safeguarding Democracy in the Era of Disinformation
A new report by Paul M. Barrett, Justin Hendrix, and Cecely Richard-Carvajal argues that this year's primary tech-related threat to elections isn't AI-generated content, but the spread of false, hateful, and violent content on social media platforms.
Reality Check: How to Protect Human Rights in the 3D Immersive Web
Our report written by Mariana Olaizola Rosenblat explains the risks to privacy and safety exacerbated by immersive technologies and recommends steps tech companies and the government can take to minimize those risks.
Gaming the System: How Extremists Exploit Gaming Sites and What Can Be Done to Counter Them
Our report describes how extremist actors are exploiting online gaming sites to disseminate violent ideologies, network with like-minded people, and perpetrate real-world harm.