A former Google employee has filed a confidential complaint with the U.S. Securities and Exchange Commission (SEC), alleging the tech giant breached its own ethics rules. The whistleblower claims Google assisted an Israeli military contractor in applying AI to drone surveillance footage in 2024.
According to the complaint, reviewed by multiple international news outlets, this cooperation contradicted Google’s public policies at the time, and the whistleblower argues that Google thereby misled investors and regulators.
The allegations centre on events from July 2024. Internal documents included in the complaint show Google’s cloud-computing division received a support request from an Israel Defense Forces (IDF) email address. The requester’s name matched a publicly listed employee of CloudEx, an Israeli tech firm and alleged IDF contractor.
The CloudEx employee reported a bug in Google’s Gemini AI. The software was failing to reliably identify objects such as soldiers, drones, and armoured vehicles in aerial video footage.
Google staff reportedly responded with suggestions and conducted internal tests to resolve the issue. A Google staffer assigned to the IDF’s Google Cloud account was also copied on the correspondence. After several exchanges, the CloudEx employee confirmed the issue had resolved itself.
While the complaint alleges this footage is related to Israel’s operations in Gaza, the documents do not cite specific evidence for this claim.
The whistleblower contends this technical assistance violated Google’s 2024 AI Principles, under which the company publicly pledged not to deploy AI for weapons or for surveillance that violates “internationally accepted norms”. Speaking anonymously, the former employee said:
Many of my projects at Google have gone through their internal AI ethics review process… But when it came to Israel and Gaza, the opposite was true.
Notably, Google updated its AI policies in February 2025, months after this incident. The update removed pledges barring the use of AI for weapons or surveillance. Google stated this shift was necessary to help democratically elected governments maintain global AI dominance.
A Google spokesperson rejected the allegations. The company argues the support did not violate its principles because the usage was not “meaningful”. The spokesperson said:
The ticket originated from an account with less than a couple hundred dollars of monthly spend on AI products.
They added that support staff provided standard, help-desk information for a generally available product and did not offer “further technical assistance”.
Google maintains its work under the $1.2 billion “Project Nimbus” contract with Israel is not directed at classified or military workloads relevant to weapons.
This report follows a pattern of tech giants facing scrutiny over the Israel-Gaza war. In August 2025, Microsoft opened an inquiry after reports surfaced that its cloud services were used for mass surveillance in Gaza and the West Bank. Microsoft subsequently cut off specific access for a unit inside Israel’s Ministry of Defense.
Meanwhile, the conflict continues. Israel reports approximately 1,200 deaths from the October 7, 2023, attacks. The Gaza Health Ministry reports over 71,000 Palestinian deaths.
Despite internal protests and previous firings of dissenting employees, tech integration continues. In December 2024, the Pentagon announced Google’s Gemini as the first AI offering on its new GenAI.mil platform.