AI Compute
At The Speed Of Light
Replace 100 GPUs with a single processor
Today’s AI compute demand is skyrocketing, driven by the massive compute requirements of training and inference for large language models (LLMs). Meeting this surging demand will require a 100x increase in AI compute performance.
We are still at the early stages of this exponential demand curve. Already, only a few companies can afford the enormous cost of acquiring the necessary GPUs and TPUs. But even beyond cost, constraints on real estate and power consumption mean that this approach will soon become untenable.
The solution lies in achieving a 100x increase in compute density within the same power and space constraints.
Harness the Speed of Light
Neurophos has completely redesigned the core of photonic computers—optical modulators—reducing their size by a factor of 10,000 compared to today's designs.
This breakthrough enables us to breathe life into a previously infeasible 3D photonic computing concept, paving the way for general AI inference hardware that can carry the workload of 100 leading GPUs while consuming only 1% of the power.
Neurophos combines its ultra-dense, high-speed in-house optical modulators with an optics stack that lets chips sit on the same plane, avoiding interconnect bottlenecks.
This design enables the OPU to achieve a true 160,000 TOPS at an unprecedented 300 TOPS per watt: 100x faster and 100x more efficient than leading GPUs.
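As a rough illustration of how these figures fit together, the sketch below works through the claimed numbers. The GPU baseline values are assumptions chosen for illustration only, not figures stated by Neurophos.

```python
# Back-of-envelope check of the 100x claims.
# OPU figures are the claimed numbers from above; the GPU baseline is an
# assumed reference point for illustration, not a vendor specification.

OPU_TOPS = 160_000        # claimed OPU throughput (TOPS)
OPU_TOPS_PER_WATT = 300   # claimed OPU efficiency (TOPS per watt)

GPU_TOPS = 1_600          # assumed leading-GPU throughput (TOPS)
GPU_TOPS_PER_WATT = 3     # assumed leading-GPU efficiency (TOPS per watt)

speedup = OPU_TOPS / GPU_TOPS                              # ~100x throughput
efficiency_gain = OPU_TOPS_PER_WATT / GPU_TOPS_PER_WATT    # ~100x per-watt efficiency
implied_opu_power_w = OPU_TOPS / OPU_TOPS_PER_WATT         # implied OPU power draw

print(f"Throughput advantage: {speedup:.0f}x")
print(f"Efficiency advantage: {efficiency_gain:.0f}x")
print(f"Implied OPU power: {implied_opu_power_w:.0f} W")
```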