
Lightmatter has announced new silicon photonics products that could dramatically speed up AI systems by solving a critical problem: the sluggish connections between AI chips in data centers.
The company’s newly unveiled Passage L200 and M1000 platforms use light instead of electricity to move data, potentially unlocking major performance gains for companies running large AI models, the company said in a statement.
For enterprises investing heavily in AI infrastructure, this development addresses a growing challenge. As GPU processing power increases, the connections between these processors have become the primary limitation. Today’s AI chips often sit idle waiting for data to arrive, wasting computing resources and slowing down results.
Lightmatter’s solution includes two products: the Passage L200 co-packaged optics (CPO) and the Passage M1000 reference platform. The L200, coming in 2026, will be available in 32 Tbps and 64 Tbps versions.
The 64 Tbps model enables packaging multiple GPUs in a single chip package with more than 200 terabits per second of I/O bandwidth.
“This enables over 200 Tbps of total I/O bandwidth per chip package, resulting in up to 8X faster training time for advanced AI models,” the company said.
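The aggregate figure follows from combining several optical engines in one package. A quick back-of-envelope check, where the per-engine bandwidth comes from the announcement but the engine count per package is an illustrative assumption, not a disclosed specification:

```python
# Back-of-envelope check of the announced aggregate bandwidth.
# L200_BANDWIDTH_TBPS is from the article; ENGINES_PER_PACKAGE is a
# hypothetical value chosen for illustration only.

L200_BANDWIDTH_TBPS = 64   # top L200 variant, in terabits per second
ENGINES_PER_PACKAGE = 4    # assumed count, not from the announcement

total_tbps = L200_BANDWIDTH_TBPS * ENGINES_PER_PACKAGE
print(f"Aggregate I/O: {total_tbps} Tbps")     # 256 Tbps, clearing the 200 Tbps figure
print(f"In bytes:      {total_tbps / 8:.0f} TB/s")
```

Four such engines would already exceed the quoted 200 Tbps per package; the exact configuration Lightmatter ships may differ.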
Customers can expect the M1000 reference platform in the summer of 2025, allowing them to develop custom GPU interconnects.
The phased release gives enterprises time to evaluate how optical interconnect technology might fit into their future infrastructure roadmaps.
How it works
Unlike traditional chip connections that can only exchange data at their edges, Lightmatter’s technology enables what the company calls “edgeless I/O” — allowing connections across the entire surface of a chip. This approach integrates optical fiber directly into silicon packaging.
The technology comes in two forms: a chiplet that sits on top of AI processors, and an interposer layer that processors sit upon. By replacing electrical connections with optical ones, data can move up to 100 times faster between chips, eliminating delays that currently plague AI computing clusters.
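The effect of a faster link on end-to-end step time depends on how communication-bound the workload is. A toy model, with all numbers chosen for illustration rather than taken from Lightmatter or any vendor, shows why the benefit concentrates in workloads where GPUs sit waiting on data:

```python
# Toy model: one training step = compute time + interconnect transfer
# time, with no compute/communication overlap. All figures below are
# illustrative assumptions, not measurements.

def step_time_ms(compute_ms: float, data_gb: float, link_gbps: float) -> float:
    """Total step time in ms for a given link speed (Gbit/s)."""
    transfer_ms = data_gb * 8 / link_gbps * 1000  # GB -> Gbit, s -> ms
    return compute_ms + transfer_ms

# Assumed baseline: 10 ms of compute, 4 GB exchanged per step, 3.2 Tbps link.
electrical = step_time_ms(compute_ms=10, data_gb=4, link_gbps=3200)
# Article's claim: optical links can move data up to 100x faster.
optical = step_time_ms(compute_ms=10, data_gb=4, link_gbps=3200 * 100)

print(f"electrical link: {electrical:.1f} ms/step")
print(f"optical (100x):  {optical:.2f} ms/step")
```

Under these assumptions the transfer time shrinks from half the step to a rounding error; a workload that spends little time communicating would see a proportionally smaller gain.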
For businesses running complex AI workloads that require thousands of GPUs working together, this could translate to faster model training, more responsive AI applications, and more efficient use of expensive computing resources.
“As silicon photonics uses light instead of electricity for interconnects, Lightmatter’s technology has an edge in terms of offering better bandwidth, energy efficiency, and improvements in latency,” said Kasthuri Jagadeesan, research director at Everest Group. “With co-packaged optics, that is, by integrating optics directly with GPUs/accelerators, Lightmatter is better positioned compared to competing solutions based on pluggable or board-level interconnects.”
Industry implications
The introduction of silicon photonics into AI infrastructure represents a potential shift in how data centers are designed. Traditional data centers connect GPUs through a hierarchy of networked switches, creating latency as data travels through multiple points to reach its destination. Lightmatter’s approach could flatten this architecture.
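The latency argument for flattening can be sketched with a simple hop-count model; the hop counts and per-hop latency here are assumptions for illustration, not figures from the article or from any switch vendor:

```python
# Illustrative hop-count comparison: GPU-to-GPU traffic crossing a
# multi-tier switched fabric vs. a direct optical link. Per-hop latency
# is an assumed constant, not a measured value.

HOP_LATENCY_US = 0.5  # assumed latency per switch hop, microseconds

def path_latency_us(hops: int) -> float:
    """Total one-way latency for a path with the given hop count."""
    return hops * HOP_LATENCY_US

hierarchical = path_latency_us(5)  # e.g. leaf -> spine -> core -> spine -> leaf
direct = path_latency_us(1)        # direct chip-to-chip optical link

print(f"switched fabric: {hierarchical} us")
print(f"direct optical:  {direct} us")
```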
This matters particularly for large language models (LLMs) and other advanced AI applications that require massive computational resources working in concert. As these models grow in complexity, the ability to efficiently move data between processing units becomes increasingly critical to overall system performance.
The technology could also impact energy consumption in data centers. Optical connections typically require less power than their electrical counterparts, potentially offering efficiency gains in addition to performance improvements.
Lightmatter, valued at $4.4 billion after raising $850 million in venture funding, isn’t alone in pursuing optical computing solutions. AMD has demonstrated similar technologies, while Nvidia has begun incorporating optical connections in some networking products.
What distinguishes Lightmatter’s approach is its focus on integrating photonics directly with AI processors rather than just networking equipment.
Industry experts see this as part of a larger trend.
“Silicon photonics can transform HPC, data centers, and networking by providing greater scalability, better energy efficiency, and seamless integration with existing semiconductor manufacturing and packaging technologies,” Jagadeesan added. “Lightmatter’s recent announcement of the Passage L200 co-packaged optics and M1000 reference platform demonstrates an important step toward addressing the interconnect bandwidth and latency between accelerators in AI data centers.”
The market timing appears strategic, as enterprises worldwide face increasing computational demands from AI workloads while simultaneously confronting the physical limitations of traditional semiconductor scaling. Silicon photonics offers a potential path forward as conventional approaches reach their limits.
Practical applications
For enterprise IT leaders, Lightmatter’s technology could impact several key areas of infrastructure planning. AI development teams could see significantly reduced training times for complex models, enabling faster iteration and deployment of AI solutions. Real-time AI applications could benefit from lower latency between processing units, improving responsiveness for time-sensitive operations.
Data centers could potentially achieve higher computational density with fewer networking bottlenecks, allowing more efficient use of physical space and resources. Infrastructure costs might be optimized by more efficient utilization of expensive GPU resources, as processors spend less time waiting for data and more time computing.
These benefits would be particularly valuable for financial services, healthcare, research institutions, and technology companies working with large-scale AI deployments. Organizations that rely on real-time analysis of large datasets or require rapid training and deployment of complex AI models stand to gain the most from the technology.
“Silicon photonics will be a key technology for interconnects across accelerators, racks, and data center fabrics,” Jagadeesan pointed out. “Chiplets and advanced packaging will coexist and dominate intra-package communication. The key aspect is integration, that is companies who have the potential to combine photonics, chiplets, and packaging in a more efficient way will gain competitive advantage.”
Looking ahead, analysts project significant growth for this technology. “By 2030, advances in photonic AI chips, optical interconnects, and co-packaged optics will propel the widespread adoption of silicon photonics in AI, telecommunication, quantum computing, and autonomous driving applications,” Jagadeesan added. “If Lightmatter can handle performance and scalability aspects and if they are able to address integration challenges, Passage L200 and M1000 platform can create an impact in next-gen AI data center fabrics, especially in the 2025–2030 timeframe.”
Source: Network World