Microsoft Admits Providing AI and Cloud Support to the Israeli Military
Microsoft has publicly confirmed it supplied AI and cloud services to the Israeli military during the Israel–Hamas war, primarily to support efforts such as locating hostages.
Microsoft confirmed earlier this week that it provided the Israeli military with advanced AI models and Azure cloud computing services during the Gaza conflict. These services included tools designed for hostage-rescue operations.
Microsoft Admits Providing AI Tech to Israel
The admission follows an investigation by the Associated Press, which revealed a nearly 200-fold increase in the Israeli military’s use of commercial AI products after Hamas’s October 7, 2023 attacks.
However, internal and external reviews found no evidence that Microsoft’s technology was directly used to target or harm Palestinian civilians in Gaza. At the same time, Microsoft concedes it has “limited visibility” into on‑premises deployments and cannot fully audit how customers use its products once delivered.
This is not the first time a tech giant has come under scrutiny for its military contracts. In April, Google faced similar protests over Project Nimbus, its cloud contract with the Israeli government.
Internal Dissent at Microsoft Over Israel
Not all voices inside Microsoft support the company’s position: in early 2025, Microsoft employees formed the “No Azure for Apartheid” group to protest contracts with the Israeli Ministry of Defense. The group’s actions led to high‑profile walkouts and to the firing of some participants under Microsoft’s conduct policies.
Activists criticized Microsoft’s assertions as insufficient, pointing to reports that Israeli forces employ AI‑assisted targeting systems such as “The Gospel” and “Lavender,” which have been linked to disproportionate civilian casualties.
Microsoft says it conducts customer due‑diligence checks and periodic compliance audits, although details of these processes remain undisclosed.
Response and Condemnation
United Nations Secretary-General António Guterres raised concerns about AI-assisted targeting in Gaza, noting that automated systems may violate international humanitarian law’s principles of distinction and proportionality.
Human rights organizations have called for a moratorium on the deployment of lethal AI and for greater transparency from technology providers. These efforts underscore that corporate accountability must go beyond policy statements to include enforceable technical safeguards.

Abdul Wasay explores emerging trends across AI, cybersecurity, startups and social media platforms in a way anyone can easily follow.