The Indian Institute of Science (IISc) has announced a breakthrough in artificial intelligence hardware by developing a brain-inspired neuromorphic computing platform. Capable of storing and processing data across 16,500 conductance states in a molecular film, this new platform represents a dramatic leap over traditional digital systems, which are limited to just two states (on and off).
Sreetosh Goswami, assistant professor at the Centre for Nano Science and Engineering (CeNSE), IISc, who led the research team behind the platform, said the discovery has allowed the team to nail down several challenges that have lingered unsolved in neuromorphic computing for over a decade.
The innovation is seen as a potential game-changer for AI hardware, offering energy efficiency, speed, and performance improvements that could unlock new possibilities for deploying AI at scale.
“The chip will be ready in the next two to three years, and we are also planning to form a startup to take it to the market,” Goswami said.
What makes this innovation different?
The IISc team’s neuromorphic platform is designed to address some of the biggest challenges facing AI hardware today: energy consumption and computational inefficiency. In traditional digital AI processors, tasks such as training large language models (LLMs) are extremely resource-intensive, requiring significant computational power and energy, often limiting them to data centers with high energy availability.
In contrast, IISc’s analog computing platform promises to dramatically cut both the time and energy required for such tasks. This is primarily due to its ability to store and process data in 16,500 different states within a single device, a massive improvement over binary systems that rely on just two states.
Goswami explained how this innovation fundamentally changes how AI algorithms are executed. “In all training processes, the core mathematical operation is vector-matrix multiplication,” Goswami said. “On a digital platform, multiplying a vector of size n by an n x n matrix takes n² steps. In contrast, our accelerator executes this in a single step. This reduction in computational steps directly translates to a substantial gain in energy efficiency.”
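For readers who want to see the arithmetic, the toy Python sketch below simply counts the multiply-accumulate steps in a conventional vector-matrix product; the size n and the data are arbitrary choices, and the single-step analog behavior is described only in the comments rather than simulated.

```python
import numpy as np

# Illustrative only: count the multiply-accumulate steps in a digital
# vector-matrix product. n and the random data are arbitrary choices.
n = 4
vector = np.random.rand(n)
matrix = np.random.rand(n, n)

# Digital execution: each of the n output elements needs n multiplies,
# so the full product costs n * n = n^2 multiply-accumulate steps.
steps = 0
result = np.zeros(n)
for i in range(n):          # one output element per matrix row
    for j in range(n):      # n multiply-accumulates per element
        result[i] += matrix[i, j] * vector[j]
        steps += 1

print(f"n = {n}, digital steps = {steps}")   # prints n^2 = 16

# An analog crossbar, by contrast, performs the same product in one
# physical step: currents flowing through the conductance matrix sum
# according to Kirchhoff's law, so all n^2 multiply-accumulates happen
# simultaneously rather than sequentially.
assert np.allclose(result, matrix @ vector)
```

The loop makes the quadratic cost explicit: doubling n quadruples the number of digital steps, while the analog operation Goswami describes stays, in principle, a single step.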
The energy efficiency of the new platform is especially impressive. According to a comparison cited by Goswami, the platform’s dot product engine delivers 4.1 TOPS/W, making it 460 times more efficient than an 18-core Haswell CPU and 220 times more efficient than an Nvidia K80 GPU, which is commonly used in AI workloads.
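Those ratios can be sanity-checked with simple arithmetic. The snippet below takes the 4.1 TOPS/W figure and the quoted 460x and 220x advantages at face value and derives the efficiency they imply for the CPU and GPU; the implied numbers are back-of-the-envelope values, not independent measurements.

```python
# Back-of-the-envelope check of the cited efficiency comparison.
# The 4.1 TOPS/W figure and the 460x / 220x ratios come from the article;
# the CPU and GPU values below are simply derived from them.
accelerator_tops_per_watt = 4.1

haswell_ratio = 460   # claimed advantage over an 18-core Haswell CPU
k80_ratio = 220       # claimed advantage over an Nvidia K80 GPU

print(f"Implied Haswell efficiency: {accelerator_tops_per_watt / haswell_ratio:.4f} TOPS/W")
print(f"Implied K80 efficiency:     {accelerator_tops_per_watt / k80_ratio:.4f} TOPS/W")
```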
The rise of neuromorphic computing
Neuromorphic computing is an advanced field of computing that mimics the architecture and processes of the human brain. Instead of using traditional digital methods that rely on binary states (0s and 1s), neuromorphic systems utilize analog signals and multiple conductance states to process information more like neurons in a biological brain.
At the heart of IISc’s innovation is the platform’s ability to handle 16,500 conductance states. Binary systems, by contrast, must combine many two-state devices to represent more complex data, which increases the time and energy required for processing.
“With our approach, a single device can store and process data across 16,500 levels in one step,” Goswami said. This makes the process highly space-efficient and allows for parallelism in computation, which speeds up AI workloads significantly.
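The figure of 16,500 levels maps neatly onto binary precision. The quick calculation below, a simple sketch rather than anything from the research itself, gives roughly 14 bits of resolution per device, consistent with the accuracy figure Goswami cites later.

```python
import math

# How much binary precision does a single 16,500-level device correspond to?
# 2**14 = 16,384, so 16,500 levels is roughly 14 bits of resolution per
# device, matching the "14-bit accuracy" figure quoted in the article.
levels = 16500
equivalent_bits = math.log2(levels)
print(f"{levels} levels ~= {equivalent_bits:.2f} bits per device")
print(f"2**14 = {2**14} states for comparison")

# A conventional binary cell holds 2 states (1 bit), so a binary memory
# would need roughly 14 cells working together to cover the same range.
```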
These systems are designed to perform tasks such as pattern recognition, learning, and decision-making more efficiently than conventional computers. By integrating memory and processing into a single unit, neuromorphic computing promises faster, more energy-efficient solutions for complex AI tasks, particularly in areas like machine learning, data analysis, and robotics.
The precision that comes with so many levels is essential for training advanced AI models, which require high-accuracy computations. “The 14-bit accuracy our accelerator offers ensures robust training performance,” added Goswami. This precision, combined with the platform’s ability to handle complex data in fewer steps, could position IISc’s innovation as a critical enabler for faster and more efficient AI models in the future.
Apart from IISc, several leading technology companies and research institutions are actively working on neuromorphic computing technology. Intel, for example, has developed its Loihi neuromorphic chip, which uses spiking neural networks to mimic brain functions, and IBM’s TrueNorth is another pioneering effort in this space, capable of simulating millions of neurons and synapses.
Meanwhile, universities such as Stanford and MIT are conducting research to advance neuromorphic computing and integrate it with AI applications. The current state of neuromorphic computing is promising but still in the experimental phase, with most developments focused on research prototypes rather than widespread commercial applications.
However, as AI and machine learning tasks grow more demanding, neuromorphic chips are seen as a potential breakthrough for achieving greater energy efficiency and speed, especially in areas like robotics, autonomous systems, and real-time data processing.
Integration with silicon-based systems
Despite its novel approach, IISc’s platform is designed to work alongside existing AI hardware, rather than replace it. Neuromorphic accelerators like the one developed by IISc are particularly well-suited for offloading tasks that involve repetitive matrix multiplication — a common operation in AI.
“GPUs and TPUs, which are digital, are great for certain tasks, but our platform can take over when it comes to matrix multiplication. This allows for a major speed boost,” explained Goswami. This hybrid approach of using both digital and analog systems could pave the way for AI models that are both more powerful and more energy efficient.
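The division of labor Goswami describes can be pictured with a short, purely illustrative sketch. The AnalogAccelerator class below is a hypothetical stand-in, not IISc’s hardware or API; it only shows how a digital host might route matrix-vector products to an analog engine while keeping everything else on conventional silicon.

```python
import numpy as np

# Conceptual sketch of the hybrid approach described above: digital code
# handles control flow while matrix-vector products are handed off to an
# accelerator. AnalogAccelerator is a hypothetical stand-in that simply
# simulates the result numerically; it is not IISc's interface.
class AnalogAccelerator:
    def __init__(self, weights: np.ndarray):
        # On real hardware the weights would be programmed as conductance
        # states in the molecular film; here we just keep the matrix.
        self.weights = weights

    def matvec(self, x: np.ndarray) -> np.ndarray:
        # Stands in for the single-step analog vector-matrix multiply.
        return self.weights @ x

# Digital host side: a toy two-layer network whose heavy multiplications
# are "offloaded" to the accelerator stand-in.
rng = np.random.default_rng(0)
layer1 = AnalogAccelerator(rng.normal(size=(64, 32)))
layer2 = AnalogAccelerator(rng.normal(size=(10, 64)))

x = rng.normal(size=32)
hidden = np.maximum(layer1.matvec(x), 0.0)   # ReLU stays on the digital side
output = layer2.matvec(hidden)
print(output.shape)   # (10,)
```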
As the demand for more advanced AI models increases, existing digital systems are nearing their energy and performance limits. Silicon-based processors, which have driven AI advancements for years, are starting to show diminishing returns in terms of speed and efficiency.
“With silicon electronics reaching saturation, designing brain-inspired accelerators that can work alongside silicon chips to deliver faster, more efficient AI is becoming crucial,” Goswami noted. By working with molecular films and analog computing, IISc is offering a new path forward for AI hardware, one that could dramatically cut energy consumption while boosting computational power.
The team has already demonstrated the platform’s capabilities by recreating NASA’s iconic “Pillars of Creation” image from the James Webb Space Telescope. A task that would normally have required a supercomputer was completed on a tabletop computer using IISc’s platform, in a fraction of the time and energy traditionally needed.
Looking ahead: A fully indigenous neuromorphic chip
The IISc team is now focused on taking the innovation further by developing a fully indigenous integrated neuromorphic chip, an effort supported by India’s Ministry of Electronics and Information Technology.
“This is a completely home-grown effort, from materials to circuits and systems,” said Goswami, who is confident the platform will be turned into a system-on-a-chip solution.
The researchers believe their innovation could be a game-changer for industries relying on AI, from cloud computing to autonomous vehicles.
As the global demand for energy-efficient AI hardware grows, IISc’s brain-inspired computing platform could offer a viable solution that bridges the gap between silicon’s limitations and the need for more powerful AI. With its novel approach to energy efficiency, storage capacity, and processing speed, IISc’s analog platform stands poised to reshape the future of AI hardware.
Source: Network World