A recent policy update from Google suggests the company is opening the door to AI for surveillance and weapons, a shift tucked quietly into its revised principles. According to Wired, Google updated its AI governing principles on Tuesday by removing the phrase “technologies that cause or are likely to cause overall harm,” hinting that it now frames a far broader range of uses as acceptable under the banner of democratic AI development.
Google Updates AI Governing Policies Siding With Surveillance and Weapons
Back in 2018, Google pledged not to use AI for weapons or surveillance purposes. However, as reported by The Washington Post, the company has now removed the clauses and statements backing that pledge, in which it said it would not design or deploy AI tools for surveillance technology or weapons.
The tech juggernaut removed phrases such as “technologies that cause or are likely to cause overall harm,” “technologies that gather or use information for surveillance violating internationally accepted norms,” and “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”
The new AI governing policies drop the section titled “Applications we will not pursue,” which has been replaced by one called “Responsible development and deployment.” The search engine giant says it will enforce human oversight, feedback mechanisms, and due diligence on users’ goals and responsibilities when using AI, and that it will abide by principles of international law and human rights.
Google’s AI Policies Are More Ambiguous Now Than Ever
For the unversed, this is a broader commitment that conspicuously omits any explicit stand against the use of AI for surveillance and weapons. Google justifies the changes by saying it will avoid technology that violates “internationally accepted norms,” without actually specifying which norms — giving it a much broader spectrum of operations that can be run safely tucked inside the fine print of the policy.
According to DeepMind CEO Demis Hassabis and James Manyika, Senior Vice President for Research, Labs, Technology & Society, in a Google blog post, the policy change was necessitated by the fact that AI has emerged as a “general-purpose technology.”
The post further states that Google believes democracies should lead in AI development, guided by core values such as human rights, equality, and freedom, and that governments and companies should work together to create AI applications that protect people, support national security, and promote global growth.
It remains to be seen what repercussions these changes to the AI governing policies will have on people across the globe. In my personal opinion, this is a reversal of what Google professed to believe just a couple of weeks ago — never mind years ago.