TECHNOLOGY
Exponential Power, Performance and Size Advantage
Neurophos technology decreases the size and energy needs of silicon photonic chips for inference on the large language models (LLMs) at the heart of artificial intelligence.
We do this through two breakthroughs: a new metamaterial and an innovative processor for AI inference.
The Neurophos Metasurface
Our optical metasurface enables silicon photonic computing capable of ultra-fast AI inference that outstrips the density and performance of both traditional silicon computing and other silicon photonics solutions.
The density of the metasurface, combined with the speed of our silicon photonics modulators, enables a fast, powerful, and efficient processor.
Our metasurface devices can be manufactured using the same mature complementary metal-oxide semiconductor (CMOS) processes used to produce larger-node processor chips.
Compute-In-Memory
We integrate high-speed silicon photonics modulators to feed our high-density metasurface compute-in-memory (CIM) processor.
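The compute-in-memory idea can be illustrated with a toy model (our own conceptual sketch, not the Neurophos design): the weight matrix is programmed into the array once and stays resident while input vectors stream through, rather than being re-fetched from memory for every multiply-accumulate.

```python
import numpy as np

class CIMArray:
    """Toy compute-in-memory model: weights are programmed once, then reused."""
    def __init__(self, weights):
        self.weights = np.asarray(weights)   # one-time programming step
        self.weight_loads = self.weights.size

    def forward(self, x):
        # A physical CIM array performs all MACs in place; here we simply
        # model the result with a standard matrix-vector product.
        return self.weights @ x

W = np.arange(6).reshape(2, 3)
cim = CIMArray(W)
for _ in range(1000):                        # stream 1000 input vectors
    y = cim.forward(np.ones(3))
# The weights were loaded only once, not once per input vector.
print(cim.weight_loads)
```

In a conventional von Neumann design, each of the 1000 inputs would trigger a fresh fetch of all six weights; here the load count stays at six regardless of how many vectors stream through.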
This innovative CIM processor architecture delivers fast, efficient matrix-vector and matrix-matrix multiplications, which make up the overwhelming majority of operations when running AI neural networks.
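A rough operation count makes the claim concrete. The sketch below (illustrative only; the layer sizes are arbitrary assumptions) tallies floating-point operations for a single transformer-style feed-forward block and shows that the two matrix multiplications account for nearly all of the work.

```python
import numpy as np

# Arbitrary toy dimensions for illustration.
d_model, d_ff, seq_len = 512, 2048, 128

x = np.random.randn(seq_len, d_model)   # token activations
W1 = np.random.randn(d_model, d_ff)     # weight matrices
W2 = np.random.randn(d_ff, d_model)

h = np.maximum(x @ W1, 0.0)             # matmul + ReLU
y = h @ W2                              # matmul

# FLOP accounting: ~2*m*k*n per (m x k)(k x n) matmul; m*n per elementwise op.
matmul_flops = 2 * seq_len * d_model * d_ff + 2 * seq_len * d_ff * d_model
other_flops = seq_len * d_ff            # the ReLU
share = matmul_flops / (matmul_flops + other_flops)
print(f"matmul share of operations: {share:.4f}")
```

Even in this tiny block the matrix multiplications account for more than 99% of the operations, which is why accelerating them dominates inference performance.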