Meta has paused all contracts with Mercor, a $10 billion AI data startup, after a cyberattack compromised the company’s systems. The breach was traced to a supply chain attack on LiteLLM, an open-source Python library used by millions of developers to connect applications to AI services. Mercor provides vetted training data to Meta, OpenAI and Anthropic, making the breach a cross-industry concern.
A hacking group known as TeamPCP compromised LiteLLM’s CI/CD pipeline on March 27, 2026. Using stolen credentials from a LiteLLM maintainer, the group published two malicious package versions, 1.82.7 and 1.82.8, to PyPI, the Python Package Index. The tainted packages were live for approximately 40 minutes before being identified and removed. Version 1.82.7 embedded base64-encoded malware directly in the library’s proxy server code, where it executed on import. Mercor confirmed it was “one of thousands of companies” affected by the compromise.
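Why can malware in a library run before any of the library’s functions are called? Because Python executes all module-level statements the moment a module is imported. The following is a benign, self-contained sketch of that mechanism, not the actual payload: the stand-in “payload” merely sets a flag, and the throwaway module name is hypothetical.

```python
import base64
import importlib.util
import pathlib
import tempfile

# Benign stand-in for an obfuscated payload: it only sets a flag here,
# whereas the tainted release carried real malware.
payload_b64 = base64.b64encode(b"PWNED = True").decode()

# A fake "package" whose module-level code decodes and runs the payload.
# Python runs every top-level statement at import time, so no function
# in the library ever needs to be called.
module_source = (
    "import base64\n"
    f'exec(base64.b64decode("{payload_b64}"))\n'
)

with tempfile.TemporaryDirectory() as tmp:
    path = pathlib.Path(tmp) / "tainted_demo.py"
    path.write_text(module_source)
    spec = importlib.util.spec_from_file_location("tainted_demo", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)  # payload runs here, at import time
    print(mod.PWNED)  # → True
```

The same property is what made the 40-minute exposure window dangerous: any build or deployment that merely imported the tainted version executed the payload.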
The extortion group Lapsus$ then claimed responsibility for the Mercor breach and began publishing stolen data on its leak site. The published samples included Slack data, internal ticketing information and two videos showing conversations between Mercor’s AI systems and contractors. Lapsus$ claims to have obtained four terabytes of data in total, including platform source code and database records. A class-action lawsuit filed on April 1 alleges Mercor failed to maintain adequate cybersecurity protections, leaving more than 40,000 people exposed to identity theft and fraud.
Meta suspended all work with Mercor pending investigation. The company has not confirmed whether its own user data or AI training methodologies were exposed. Meta’s AI infrastructure spending is projected between $115 billion and $135 billion for 2026, making its training pipeline one of its most sensitive assets. OpenAI said it is investigating the incident but has not paused its projects with Mercor. Anthropic has not commented publicly.
The episode underscores that when competing AI companies rely on the same third-party data supplier, a single attack exposes all of them at once. It also shows how quickly a compromised open-source dependency, in this case one with 97 million monthly downloads, spreads damage across the AI supply chain. AI startups will have to harden themselves against attackers who turn the very tools that define them into weapons.
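One common hardening measure against malicious package releases is hash pinning: record the digest of an artifact you vetted, and refuse anything that does not match, even if it carries the same version number. pip supports this natively via hash-checking mode (`--require-hashes` with hashes listed in a requirements file); the snippet below is a minimal standalone sketch of the same idea, with an illustrative throwaway file standing in for a downloaded wheel.

```python
import hashlib
import pathlib
import tempfile

def sha256_of(path):
    """Stream a file and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path, expected_sha256):
    """Accept an artifact only if its digest matches the pinned value."""
    return sha256_of(path) == expected_sha256

# Demo: a throwaway file stands in for a downloaded wheel (hypothetical name).
with tempfile.TemporaryDirectory() as tmp:
    wheel = pathlib.Path(tmp) / "example-1.0-py3-none-any.whl"
    wheel.write_bytes(b"vetted contents")
    pinned = sha256_of(wheel)               # recorded when the release was reviewed
    ok = verify_artifact(wheel, pinned)     # unmodified artifact passes
    tampered = verify_artifact(wheel, "0" * 64)  # mismatched digest is rejected
print(ok, tampered)  # → True False
```

A pinned digest would not have stopped the maintainer-credential theft itself, but it would have blocked the silently swapped package versions from installing in downstream builds.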

