A silicon photonics startup backed by Bill Gates’s Gates Frontier fund has revealed optical transistor technology roughly 10,000 times smaller than the optical components produced by today’s silicon photonics foundries. This technology comes alongside optical chips capable of executing large matrix computations, a fundamental requirement of modern AI models.
The company, Neurophos, aims to harness the speed and efficiency of photonic computing, using light (photons) instead of electricity (electrons) to overcome longstanding limits on conventional semiconductor scaling and energy consumption.
In recent testing, its optical processing units (OPUs) reportedly outperformed traditional silicon GPUs by a wide margin on the matrix workloads central to neural-network training and inference.
“On chip, there is a single photonic sensor that is 1,000 by 1,000 in size,” Neurophos CEO Patrick Bowen said. That array holds about 15 times as many elements as the 256 x 256 matrix tiles typical of today’s AI GPUs. Fitting it on a die required the company to make its optical transistor around 10,000 times smaller than what’s currently available. “The equivalent of the optical transistor that you get from Silicon Photonics factories today is massive. It’s like 2 mm long,” Bowen added. “You just can’t fit enough of them on a chip in order to get a compute density that remotely competes with digital CMOS today.”
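The “about 15 times” figure follows directly from the element counts of the two array sizes quoted above, as this small arithmetic check illustrates:

```python
# Illustrative arithmetic only: element counts for the quoted array sizes.
photonic_elements = 1_000 * 1_000   # 1,000 x 1,000 photonic array
gpu_tile_elements = 256 * 256       # 256 x 256 matrix tile

ratio = photonic_elements / gpu_tile_elements
print(f"{photonic_elements:,} vs {gpu_tile_elements:,} elements "
      f"-> about {ratio:.1f}x more")  # ~15.3x, consistent with "about 15 times"
```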
Neurophos’s optical transistors, implemented with nanostructured metasurfaces, shrink the footprint of computing elements while enabling thousands of parallel operations per chip. The startup also recently closed a $110 million Series A funding round led by Gates Frontier, with participation from Microsoft’s M12, Aramco Ventures, and others, signaling investor confidence in photonics as a next-generation compute architecture.
Here is why photonics fundamentally differs from traditional chips: optical signals carry information with lower heat generation and energy loss, enabling high-bandwidth, low-latency data movement, which is a key bottleneck for electronics. Silicon photonic integrated circuits are already used in data-center interconnects and telecommunications, but moving them into core compute roles for AI and HPC workloads represents a leap forward.
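As a point of reference for the workload at stake (a minimal NumPy sketch, not Neurophos’s design): the core AI operation is the matrix-vector product, which a digital chip evaluates as n × n sequential-or-tiled multiply-accumulates, while a photonic array can in principle evaluate the whole product in a single optical pass.

```python
import numpy as np

# Minimal sketch: y = W @ x is the multiply-accumulate-heavy operation
# at the heart of neural-network inference. The tile size n = 256 matches
# the conventional matrix dimension mentioned in the article.
n = 256
rng = np.random.default_rng(0)
W = rng.standard_normal((n, n))  # weight matrix (one tile)
x = rng.standard_normal(n)       # input activations

y = W @ x                        # n * n multiply-accumulates on a digital chip
macs = n * n
print(f"{macs:,} MACs per pass for a {n}x{n} tile")
```

A photonic mesh that encodes W optically would perform all of those multiply-accumulates in parallel as light propagates through it, which is the source of the parallelism claims above.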
The company’s first-generation accelerator is set to feature “the optical equivalent” of a single tensor core, measuring about 25 mm². While this might seem small next to NVIDIA’s Vera Rubin chip, which boasts an impressive 576 tensor cores, the real distinction lies in how Neurophos is leveraging the photonic die.
A recent market analysis suggests the photonic chips for AI market could grow at nearly 20% annually from 2025 to 2029, driven by demands for higher speed, energy efficiency, and parallelism that traditional silicon alone cannot provide.
Photonic AI chips also promise improved energy efficiency and scaling. Research and industry prototyping show optical AI accelerators delivering orders-of-magnitude improvements in performance per watt compared with conventional GPU-based architectures, which struggle with rising power draw and heat constraints as AI models expand.
Major startups and research labs are pursuing this approach. Lightmatter, founded by MIT alumni, is developing photonics-based AI processors and high-speed interconnects that use light to move data between chip components efficiently. In parallel, other photonic quantum and optical computing efforts, such as a high-performance photonic quantum chip in China designed to accelerate complex calculations, are drawing mainstream global investment into photonics.
As AI workloads continue to grow exponentially and Moore’s Law slows, innovations such as these optical transistors and photonic processors portend a fundamental shift in how computing power is delivered.