Whistleblower Claims Google Violated AI Policy on Military Use

Summary

A whistleblower alleges Google violated its AI policies by assisting an Israeli military contractor with drone footage analysis in 2024.

Why this matters

The complaint raises questions about how major U.S. tech companies apply ethical guidelines on AI use, especially in conflict zones. It also highlights broader concerns over transparency and corporate accountability in defense-related applications.

A former Google employee has alleged in a confidential federal whistleblower complaint that the company violated its artificial intelligence policies in 2024 by supporting an Israeli military contractor analyzing drone footage, according to internal documents reviewed by The Washington Post.

The complaint, filed with the Securities and Exchange Commission (SEC) in August, claims Google's Gemini AI technology was used by Israel's defense forces despite the company's public AI principles, which prohibited applications related to weapons or to surveillance that violates international norms.

According to the documents, Google's cloud-computing unit received a support request in July 2024 from an email address associated with the Israel Defense Forces (IDF). The request, attributed to a listed employee of the Israeli tech firm CloudEx, which the complaint alleges is an IDF contractor, sought help improving the recognition of drones, vehicles, and soldiers in aerial video. Google employees provided assistance and ran internal tests, the documents show.

The whistleblower said the assistance contradicted Google’s stated AI principles. “Many of my projects at Google have gone through their internal AI ethics review process,” the complainant said in a statement provided anonymously to The Post. “But when it came to Israel and Gaza, the opposite was true.”

The complaint also alleges that Google misled investors and regulators by failing to adhere to its public policies, which had been included in federal filings, thereby violating U.S. securities law.

A Google spokesperson disputed the characterization, saying the assistance was limited to routine customer support and did not constitute a violation. “We answered a general use question… with standard, help desk information, and did not provide any further technical assistance,” the company said. The spokesperson added that the account linked to the request spent less than a few hundred dollars a month on AI tools, limiting its ability to use the technology in a “meaningful” way.

Google’s cloud video intelligence documentation states that video object tracking is free for the first 1,000 minutes, then costs 15 cents per minute. At that rate, a few hundred dollars a month would cover roughly 1,300 to 2,000 minutes of video analysis beyond the free allotment.

The SEC declined to comment. Filing a whistleblower complaint does not automatically trigger an investigation, and the agency does not make complaints public.

Representatives for the IDF and CloudEx did not respond to requests for comment.

CloudEx sponsored a 2024 tech event near Tel Aviv that featured Israeli military officials discussing cloud computing’s role in Gaza operations.

The complaint alleges the Gemini AI use was tied to Israeli actions in Gaza, though it does not provide direct evidence. In prior statements, Google said its work with Israel’s government did not involve classified, military, or intelligence-related workloads.

Google adopted its AI principles in 2018 after employee opposition to Project Maven, a Pentagon drone-analysis program. The policy barred applications tied to weapons or surveillance. In early 2025, however, Google updated the policy, removing the ban and stating its intent to support democratic governments in global AI development.

In 2021, Google and Amazon were awarded a $1.2 billion cloud-services contract with Israel’s government known as Project Nimbus; Microsoft holds similar contracts. Employee protests followed, and Google fired more than 50 workers in April 2024 after office sit-ins opposing the company’s work with Israel. Microsoft has also dismissed protesting staff.

In December 2024, the Pentagon named Google’s Gemini as the first AI tool approved for Defense Department use under its GenAI.mil platform.

Public concern over U.S. tech firms’ involvement in military applications has intensified amid the Israel-Gaza conflict. Israel reported that about 1,200 people, mostly civilians, were killed in Hamas’s Oct. 7, 2023, attacks. The Gaza Health Ministry reports more than 71,000 Palestinians have died in the conflict.

The Post previously reported that Google accelerated the Israeli military’s access to its AI tools after the Oct. 7 attacks. In an internal memo, a Google employee warned colleagues that delays in processing Israeli Defense Ministry requests could drive the ministry to seek help from Amazon instead.

In August, Microsoft opened an internal investigation after the Guardian reported that its services may have supported mass surveillance of civilians in Gaza and the West Bank. The company later confirmed that it had restricted the Israeli Defense Ministry’s access to some of its services in response.
