Power, rather than raw compute, is rapidly becoming the defining constraint on the next phase of artificial intelligence infrastructure, and investors are starting to treat energy efficiency as a strategic differentiator rather than a secondary concern. That shift is evident in Peak XV Partners leading a $15 million Series A round in C2i Semiconductors, an Indian startup focused on cutting electricity losses inside large-scale AI data centers.
Founded in 2024 by former Texas Instruments power-systems engineers, C2i (short for control, conversion and intelligence) is building a plug-and-play "grid-to-GPU" power delivery architecture. Instead of relying on today's fragmented power stacks, in which electricity is stepped down repeatedly across thousands of components, C2i is attempting to redesign the entire path from a data center's electrical bus to the processor.
According to co-founder and CTO Preetam Tadeparthy, traditional systems waste roughly 15 to 20 percent of energy during conversion, while C2i's unified approach aims to recover around 10 percent of the power flowing through the system. At hyperscale, that translates to saving roughly 100 kilowatts for every megawatt consumed.
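The per-megawatt figure follows directly from the stated percentages. A minimal sketch of the arithmetic (the function name and structure are illustrative, not from C2i):

```python
def power_saved_kw(load_kw: float, efficiency_gain: float) -> float:
    """Power recovered for a given load and fractional efficiency gain.

    efficiency_gain is the fraction of total input power recovered,
    e.g. 0.10 for the roughly 10 percent C2i is targeting.
    """
    return load_kw * efficiency_gain

# Per megawatt (1,000 kW) of load, a 10 percent gain saves 100 kW,
# matching the article's hyperscale figure.
print(power_saved_kw(1_000, 0.10))  # 100.0
```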
The round, which also includes participation from Yali Deeptech and TDK Ventures, brings C2i’s total funding to $19 million and reflects a broader reassessment across the tech industry. Compute can be added, but power availability increasingly cannot. Data center energy demand is projected to nearly triple by 2035, according to BloombergNEF, while Goldman Sachs estimates global data center power consumption could rise 175 percent by 2030 compared to 2023 levels. In many regions, grid capacity, permitting timelines and energy costs are now gating AI expansion.
Peak XV managing director Rajan Anandan has argued that once facilities and hardware are deployed, electricity becomes the dominant long-term cost driver. Even single-digit efficiency gains can compound into tens of billions of dollars in savings when applied across hyperscale fleets, which is why infrastructure-level innovations are attracting renewed investor attention.
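To see how single-digit gains reach that scale, consider a back-of-the-envelope calculation. The fleet size, efficiency gain, and electricity price below are illustrative assumptions, not figures from the article or from Peak XV:

```python
def annual_savings_usd(fleet_gw: float, gain: float, price_per_kwh: float) -> float:
    """Rough annual electricity-cost savings across a data center fleet.

    All inputs are illustrative assumptions:
    fleet_gw      -- average fleet power draw, in gigawatts
    gain          -- fraction of power recovered (e.g. 0.05 for 5 percent)
    price_per_kwh -- blended electricity price, in USD
    """
    hours_per_year = 8760
    kwh_per_year = fleet_gw * 1e6 * hours_per_year  # GW -> kW, then kWh/year
    return kwh_per_year * gain * price_per_kwh

# Assumed: a 60 GW global AI fleet, a 5 percent gain, $0.08/kWh.
# Yields roughly $2.1 billion per year, so on the order of tens of
# billions of dollars when compounded over a decade of operation.
print(annual_savings_usd(60, 0.05, 0.08))
```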
Beyond cost, C2i’s approach could also ease thermal pressure. As AI rack densities push beyond 600 kilowatts per rack, excess heat becomes harder and more expensive to remove. Reducing power loss upstream lowers cooling requirements and helps extend the usable life of existing facilities, a growing concern as operators struggle to retrofit older data centers for AI workloads.
C2i expects its first two silicon designs to return from fabrication between April and June 2026, after which it plans to begin validation with data center operators and hyperscale customers. With teams now forming in the United States and Taiwan, the startup is positioning itself for early deployment at a moment when the industry is confronting a clear reality: future AI scale will be dictated as much by electrons as by algorithms.