
TECHNOLOGY


Exponential Power, Performance and Size Advantage

Neurophos technology decreases the size and energy needs of silicon photonic optical chips for inference on the large language models (LLMs) at the heart of artificial intelligence.

 

We do this through two breakthroughs: a new metamaterial and an innovative processor for AI inference.

The Neurophos Metasurface

Our optical metasurface enables silicon photonic computing capable of ultra-fast AI inference that outstrips the density and performance of both traditional silicon computing and other silicon photonics solutions. 

 

The density of the metasurface, combined with the speed of our silicon photonic modulators, enables our fast, powerful, efficient processor.

Our remarkable advances can be manufactured using the same mature complementary metal-oxide-semiconductor (CMOS) processes used to create larger-node processor chips.

Compute-In-Memory 

We integrate high-speed silicon photonic modulators to feed data into our high-density metasurface compute-in-memory (CIM) processor.

 

This innovative CIM processor architecture delivers fast, efficient matrix-vector multiplications, which make up the overwhelming majority of operations when running AI neural networks.
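To see why matrix multiplications dominate, consider a minimal sketch in plain Python that counts multiply-accumulate (MAC) operations in a small fully connected network. The layer sizes here are hypothetical, chosen only for illustration; they are not Neurophos specifications.

```python
# Sketch: count multiply-accumulate (MAC) operations in a tiny
# fully connected network. Layer sizes are hypothetical.

def layer_macs(n_in, n_out):
    # One matrix-vector multiplication: n_out rows, each a dot
    # product of length n_in -> n_in * n_out MACs.
    return n_in * n_out

# (input, output) sizes for three hypothetical layers
layers = [(784, 1024), (1024, 1024), (1024, 10)]
matmul_macs = sum(layer_macs(i, o) for i, o in layers)

# Per-element work outside the matrix multiplies (bias add +
# activation): roughly two ops per output element.
other_ops = sum(2 * o for _, o in layers)

fraction = matmul_macs / (matmul_macs + other_ops)
print(matmul_macs, other_ops, fraction)
```

Even in this tiny network, well over 99% of the arithmetic sits inside the matrix-vector products, which is why accelerating them accelerates inference as a whole.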


Performance

The unprecedented increase in computational density enabled by our metasurface provides a major advantage over both conventional processors and other silicon photonic solutions. We deliver orders of magnitude higher speeds than today's state-of-the-art GPUs.
 

Power

In optical computing, energy efficiency scales with array size: the energy spent modulating each input is shared by every multiplication in the array. As a result, a Neurophos processor is hundreds of times more energy efficient than alternatives.
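The amortization argument can be sketched with a simple model: in an N x N optical matrix-vector multiplier, each of the N input modulations is reused by all N rows, so the modulation energy is spread over N squared multiply-accumulates. The energy figure below is an illustrative placeholder, not a Neurophos measurement.

```python
# Sketch of energy amortization in an N x N optical array.
# E_MOD is a hypothetical per-modulation energy, for illustration only.

E_MOD = 1.0e-12  # assumed joules per input modulation (placeholder)

def energy_per_mac(n):
    # N modulations drive N * N MACs, so per-MAC energy falls as 1/N.
    return (n * E_MOD) / (n * n)

for n in (64, 1024):
    print(n, energy_per_mac(n))
```

Under this model, growing the array from 64 x 64 to 1024 x 1024 cuts the modulation energy per operation by a factor of 16, which is the sense in which larger optical arrays are more efficient.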

Neurophos enables this technology to be used in AI data centers. That market currently relies on traditional silicon semiconductors, which generate enormous amounts of heat and are struggling to scale to the performance demands of AI inference.

Size

Our optical compute-in-memory elements are thousands of times smaller than traditional silicon photonics modulators, allowing our architecture to process vastly larger matrices on chip.

This results in an unprecedented increase in computational density: Neurophos’ metamaterial-based optical modulators are more than 1,000 times smaller in area than those from a standard foundry process design kit (PDK).
