How a Pentagon Contract Became an Identity Crisis for Google
Fei-Fei Li is among the brightest stars in the burgeoning field of artificial intelligence, somehow managing to hold down two demanding jobs simultaneously: head of Stanford University’s A.I. lab and chief scientist for A.I. at Google Cloud, one of the search giant’s most promising enterprises.
Yet last September, when nervous company officials discussed how to speak publicly about Google’s first major A.I. contract with the Pentagon, Dr. Li strongly advised shunning those two potent letters.
“Avoid at ALL COSTS any mention or implication of AI,” she wrote in an email to colleagues reviewed by The New York Times. “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google.”
Dr. Li’s concern about the implications of military contracts for Google has proved prescient. The company’s relationship with the Defense Department since it won a share of the contract for the Maven program, which uses artificial intelligence to interpret video images and could be used to improve the targeting of drone strikes, has touched off an existential crisis, according to emails and documents reviewed by The Times as well as interviews with about a dozen current and former Google employees.
It has fractured Google’s work force, fueled heated staff meetings and internal exchanges, and prompted some employees to resign. The dispute has caused grief for some senior Google officials, including Dr. Li, as they try to straddle the gap between scientists with deep moral objections and salespeople salivating over defense contracts.
The advertising model behind Google’s spectacular growth has provoked criticism that it invades web users’ privacy and supports dubious websites, including those peddling false news. Now the company’s path to future growth, via cloud-computing services, has divided the company over its stand on weaponry. To proceed with big defense contracts could drive away brainy experts in artificial intelligence; to reject such work would deprive it of a potentially huge business.
The internal debate over Maven, viewed by both supporters and opponents as opening the door to much bigger defense contracts, generated a petition signed by about 4,000 employees who demanded “a clear policy stating that neither Google nor its contractors will ever build warfare technology.”
Executives at DeepMind, an A.I. pioneer based in London that Google acquired in 2014, have said they are completely opposed to military and surveillance work, and employees at the lab have protested the contract. The acquisition agreement between the two companies said DeepMind technology would never be used for military or surveillance purposes.
About a dozen Google employees have resigned over the issue, which was first reported by Gizmodo. One departing engineer petitioned to rename a conference room after Clara Immerwahr, a German chemist who killed herself in 1915 after protesting the use of science in warfare. And “Do the Right Thing” stickers have appeared in Google’s New York City offices, according to company emails viewed by The Times.
Those emails and other internal documents, shared by an employee who opposes Pentagon contracts, show that at least some Google executives anticipated the dissent and negative publicity. But other employees, noting that rivals like Microsoft and Amazon were enthusiastically pursuing lucrative Pentagon work, concluded that such projects were crucial to the company’s growth and nothing to be ashamed of.