
Google to Work with Military but Pledges Not to Build AI Weapons

Last updated on June 8, 2018 by Dotsquares

Following the controversy over Google’s involvement with the United States Department of Defense’s Project Maven, Google has finally released the document ‘Artificial Intelligence at Google: Our Principles’, after pre-announcing it last month.

The principles in the document clarify that the company will not work on AI weapons that could “directly facilitate injury to people”, or on AI-based surveillance projects that violate “internationally accepted norms” or the “principles of international law and human rights”.

It is noteworthy that Google has also recently released a statement clarifying that it will not renew its Project Maven contract when it expires in March next year. The company was compelled to take these actions after thousands of its employees signed a petition against the project.

Ever since Google announced its partnership with the Pentagon on Project Maven, which involves using AI to identify objects in low-resolution drone footage, critics have voiced concern about the negative potential of AI in military applications.

Furthermore, some reports state that the project itself was quite small in scale and was, in fact, a try-out for top AI developers, including Google, Microsoft, IBM, and Amazon, to win a more rewarding contract estimated to be worth $10 billion.

So even though the company has ended its participation in Project Maven, it will, according to a report published by The Verge, continue to compete for other parts of the contract as long as the work involved is in line with the principles it has announced.

Further Implications of the AI Principles

AI and automated systems have always been a hotly debated topic among industry and tech leaders. Everyone agrees that the frontier ahead of us, and the incredible potential AI presents, make it necessary to formulate certain ethics around the R&D of the technology.

In the blog post accompanying the principles document, CEO Sundar Pichai also shed light on the issue, writing: “We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”

It is clear from the principles that it is indeed tricky to create specific rules that cover as many negative possibilities as possible while remaining flexible enough to accommodate positive uses. So while these principles were created to address one specific scenario, the day of their announcement can be considered yet another historic event for the Fourth Industrial Revolution, marking the beginning of a more systematic and reliable evolution of AI and machine learning.

Resources

https://ai.google/principles

https://gizmodo.com/google-is-helping-the-pentagon-build-ai-for-drones-1823464533

https://www.theverge.com/2018/6/7/17439310/google-ai-ethics-principles-warfare-weapons-military-project-maven

https://www.wired.com/story/google-sets-limits-on-its-use-of-ai-but-allows-defense-work/

http://indianexpress.com/article/technology/tech-news-technology/google-staff-ai-revolt-jeopardizes-pentagon-cloud-deals-5204980/