AI data centres are wasting significant amounts of electricity because they cannot manage the rapid power surges that occur when thousands of GPUs switch between computation and communication tasks. A new startup called Niv-AI has emerged from stealth with $12 million in seed funding to fix the problem.
When frontier AI labs operate thousands of GPUs in concert to train and run large models, the processors create frequent, millisecond-scale demand spikes as they shift between tasks. These surges make it difficult for data centres to manage how much power they draw from the electrical grid. To avoid running short, operators either pay for temporary energy storage to cover the spikes or throttle their GPU usage by as much as 30%. Both options reduce the return on investments in chips that can cost tens of thousands of dollars each.
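To see why the spikes are so costly, consider a toy simulation (all figures here are illustrative assumptions, not Niv-AI's or any operator's real numbers): a cluster whose GPUs cycle between high-draw compute bursts and lower-draw communication phases. The grid connection must be sized for the peak, but useful work tracks the average, so the gap between the two is stranded capacity.

```python
# Illustrative sketch: synthetic power trace for a GPU cluster alternating
# between compute bursts and communication lulls, showing why provisioning
# for the peak strands capacity. All wattages and cycle timings are assumed.

COMPUTE_W = 700.0   # assumed per-GPU draw during a compute burst (watts)
COMM_W = 350.0      # assumed per-GPU draw during a communication phase
N_GPUS = 1000

def trace(steps=1000, burst=7, lull=3):
    """1 ms steps: `burst` ms of compute followed by `lull` ms of comms."""
    out = []
    for t in range(steps):
        phase = t % (burst + lull)
        out.append(N_GPUS * (COMPUTE_W if phase < burst else COMM_W))
    return out

loads = trace()
peak, avg = max(loads), sum(loads) / len(loads)
# The grid connection must cover the peak, but work done tracks the average:
stranded = 1 - avg / peak
print(f"peak {peak/1e3:.0f} kW, average {avg/1e3:.0f} kW, "
      f"{stranded:.0%} of the connection idle on average")
```

Even this benign 70% duty cycle leaves 15% of the provisioned connection unused on average; real training workloads are burstier and less regular, which is what makes the measurement problem hard.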
“There is so much power squandered in these AI factories,” NVIDIA CEO Jensen Huang said during a keynote at the company’s annual GTC conference. “Every unused watt is revenue lost.”
Niv-AI, founded last year by CEO Tomer Timor and CTO Edward Kiznis in Tel Aviv, is deploying rack-level sensors that measure GPU power usage at millisecond resolution. The goal is to understand the specific power profiles of different deep learning tasks and develop techniques that allow data centres to use more of the capacity they are already paying for. The company is backed by Glilot Capital, Grove Ventures, Arc VC, Encoded VC, Leap Forward, and Aurora Capital Partners.
The founders plan to build an AI model on the data they collect, training it to predict and synchronise power loads across a data centre. The envisioned product is essentially a copilot for data centre engineers, sitting as an intelligence layer between the facility and the electrical grid.
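One way to picture the "predict and synchronise" idea (a hedged sketch, not Niv-AI's actual system; all job cycles, wattages, and the greedy scoring rule are assumptions): if a controller knows each training job's compute/communication cycle, it can stagger the jobs' phase offsets so their power bursts don't all hit the grid at once.

```python
# Illustrative sketch of load synchronisation: stagger periodic GPU jobs so
# their compute bursts interleave rather than stack. Parameters are assumed.

PERIOD, BURST = 10, 7          # assumed 10 ms cycle: 7 ms compute, 3 ms comms
HI, LO = 700.0, 350.0          # assumed per-GPU draw in each phase (watts)
GPUS_PER_JOB = 250

def job_load(t, offset):
    """Per-GPU draw of one job at millisecond t, given its phase offset."""
    return HI if (t - offset) % PERIOD < BURST else LO

def aggregate(offsets, t):
    """Total facility draw at millisecond t across all jobs."""
    return sum(GPUS_PER_JOB * job_load(t, off) for off in offsets)

def peak(offsets):
    return max(aggregate(offsets, t) for t in range(PERIOD))

def stagger(n_jobs):
    """Greedy: give each new job the offset that keeps the load flattest,
    scored by the sum of squared aggregate draw over one period."""
    offsets = []
    for _ in range(n_jobs):
        def cost(o):
            return sum(aggregate(offsets + [o], t) ** 2 for t in range(PERIOD))
        offsets.append(min(range(PERIOD), key=cost))
    return offsets

synced = peak([0, 0, 0, 0])    # all four jobs bursting together
flat = peak(stagger(4))
print(f"synchronised peak {synced/1e3:.1f} kW vs staggered {flat/1e3:.1f} kW")
# → synchronised peak 700.0 kW vs staggered 612.5 kW
```

Flattening the aggregate in this way is exactly what lets a facility present a steadier profile to the grid while running the same amount of compute, though real jobs are far less periodic than this toy assumes.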
Timor described it as a two-sided problem. On one side, data centres need to utilise more of their GPUs and extract more value from the power they already pay for. On the other, they need to present more predictable and responsible power profiles to grid operators, who are wary of the erratic consumption patterns AI facilities create.
Niv-AI expects to have an operational system running in a handful of US data centres within six to eight months. The timing is significant. Hyperscalers are racing to build new data centre capacity but face land-use disputes, supply chain delays, and grid connection bottlenecks. A tool that unlocks more performance from existing facilities without requiring additional power infrastructure could prove extremely valuable in the interim.
“We just can’t continue building data centres the way we build them now,” said Lior Handelsman, a partner at Grove Ventures and a Niv-AI board member.