Google Joins Other Tech Giants In Retreating From Human Rights Commitments

February 6, 2025

Google has revised its Responsible AI Principles in a way that signals a weakening commitment to respecting human rights. The previous version of the principles stated that Google would not design or deploy “technologies whose purpose contravenes widely accepted principles of international law and human rights,” including technologies used for weaponry or for surveillance that violates internationally accepted norms. That clear and bold commitment has been replaced in the new policy by an almost meaningless reference to human rights: the company now states only that it will pursue due diligence so as to “align with user goals, social responsibility, and widely accepted principles of international law and human rights.”

To some, this change may seem cosmetic. In fact, it represents a significant erosion of Google’s commitment to ensuring that it is not complicit in violations of human rights and humanitarian law.

AI-powered surveillance technologies are at the center of human rights violations around the world. Facial recognition tools are used in China’s Xinjiang region to monitor, profile, and control Uyghur Muslims in real time, facilitating the government’s persecution of that minority group. Similarly, Russian authorities have used AI-powered facial recognition to identify and arrest anti-government protesters.

Accordingly, the deployment of AI for surveillance has been deemed among the highest-risk use cases under the EU’s AI Act and therefore subject to the law’s stringent restrictions. Some applications of AI — namely, the use of live facial recognition and other real-time tracking in publicly accessible places — are banned, with very narrow exceptions. Other applications, such as the use of AI in ex-post facial recognition, predictive policing systems, or forensic analysis, are classified as “high risk” and trigger strict obligations including human oversight and technical documentation.

The AI Act excludes military applications from its scope, but any use of force is subject to widely accepted rules of humanitarian law. These include the principles of distinction (the ability to distinguish at all times between civilians and combatants) and proportionality (the requirement to avoid undue collateral damage). The complexity, unreliability, and data-driven nature of currently available AI models raise serious questions about their ability to conform with these principles.

While it is understandable that Google wants to stay competitive in the global AI race, the company should not simply jettison a prudent commitment without acknowledging the risks. To maintain its credibility as a leader in responsible AI, Google needs to justify this change in policy and outline in detail which safeguards it will adopt to ensure its compliance with human rights and humanitarian principles.
