CS-2: A Revolution in AI Infrastructure
You can’t achieve revolutionary performance gains if you’re limited by standard chip packaging. Every detail of the CS-2 system — from power and cooling to packaging and data delivery — has been carefully engineered to drive our second-generation wafer-scale engine (the WSE-2). This means no compromises from us, and for you, peak performance without the complexity of a large-scale cluster deployment.
Cluster-scale Performance in a Single System
A single CS-2 typically delivers the wall-clock compute performance of many tens to hundreds of graphics processing units (GPUs), or more. In one system less than one rack in size, the CS-2 delivers answers in minutes or hours that would take days, weeks, or longer on large multi-rack clusters of legacy, general-purpose processors.
At 15 rack units (RU) and a peak sustained system power of 23 kW, the CS-2 packs the performance of a room full of servers into a single unit the size of a dorm-room mini-fridge. With cluster-scale compute available in a single device, you can push your research further, at a fraction of the cost.
Datacenter-scale AI in a CS-2 Cluster
Harness the AI performance of a supercomputer with a cluster of CS-2 machines. Multiple CS-2 machines can be clustered to scale up throughput, further accelerating training and inference and supporting multi-billion to even trillion-parameter models. Large-scale data centers and supercomputers typically have hundreds to thousands of nodes, take months or years to build, occupy facilities the size of airport terminals, and often draw more than 10 MW. By comparison, each individual CS-2 provides the compute equivalent of tens to hundreds of traditional nodes. This means you can deploy datacenter-scale AI compute to unlock world-leading innovation in just a few days or weeks — delivering greater performance in a space- and power-efficient package built for the job.
Purpose-Built Design, Carrier-Grade Reliability
You wouldn’t put a racecar engine in an economy-car chassis. The CS-2 is designed from the ground up to power, cool, and feed data to our revolutionary WSE-2 processor so that it can deliver unparalleled performance to users, all in a package that is easy to deploy, operate, and maintain in your datacenter today.
At the heart of the CS-2 system is an innovative wafer packaging solution we call the engine block. The engine block delivers power directly into the face of the wafer, achieving a power density that traditional packaging cannot. It provides uniform cooling for the wafer via a closed internal water loop, which is in turn cooled by facility water or air. All pumps, fans, and power supplies are redundant and hot-swappable, so you stay up and running at full performance.
Revolutionary AI Compute in a Standards-Based System
The CS-2 installs easily into standard datacenter infrastructure — from loading dock to users’ hands in a few days, rather than the weeks or months typically required to provision a traditional cluster.
The CS-2 uses standards-based power and network connections to integrate seamlessly into your existing systems. It connects to surrounding infrastructure over 12x standard 100 Gigabit Ethernet links and converts standard TCP/IP traffic into the Cerebras protocol at full line rate to feed the WSE-2’s 850,000 cores. Simply plug the CS-2 into power and a 100 GbE switch, and you’re ready to start accelerating your AI workloads.
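The aggregate ingest bandwidth implied by those 12 links is easy to work out. The sketch below is a back-of-the-envelope calculation based only on the raw line rate stated above; it ignores Ethernet/IP/TCP framing overhead, so real achievable throughput would be somewhat lower.

```python
# Back-of-the-envelope aggregate bandwidth for 12x 100 Gigabit Ethernet links.
# Raw line rate only -- protocol overhead (Ethernet/IP/TCP headers) is ignored.
NUM_LINKS = 12
LINK_GBPS = 100  # gigabits per second per link

aggregate_gbps = NUM_LINKS * LINK_GBPS   # 1200 Gb/s, i.e. 1.2 Tb/s
aggregate_gbytes = aggregate_gbps / 8    # convert bits to bytes: 150 GB/s

print(f"Aggregate line rate: {aggregate_gbps} Gb/s (~{aggregate_gbytes:.0f} GB/s)")
```

In other words, the dozen standard links together offer on the order of 1.2 Tb/s (roughly 150 GB/s) of raw data delivery into the system.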
Massive Gains in Space and Power Efficiency
For AI researchers and data scientists, the CS-2 delivers the ability to test more ideas per unit time with unmatched AI compute performance. In a world cognizant of the costs of AI computing, the CS-2 delivers these performance gains in a far more space- and power-efficient package. For typical customer workloads running today, the CS-2 delivers approximately a 100x wall-clock compute advantage vs. a GPU, at 1/5th the space and 1/3rd the total power.
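Taken together, those headline figures imply a large performance-per-watt gain. The sketch below is illustrative arithmetic only, using the approximate ratios quoted above (not measured benchmark data): a speedup S achieved at a relative power P yields a performance-per-watt gain of S/P.

```python
# Illustrative efficiency arithmetic using the document's approximate
# headline ratios -- these are marketing figures, not measurements.
speedup = 100        # ~100x wall-clock compute advantage
power_ratio = 1 / 3  # ~1/3rd the total power of the compared setup

perf_per_watt_gain = speedup / power_ratio
print(f"~{perf_per_watt_gain:.0f}x performance per watt")
```

Under those stated ratios, a ~100x speedup at ~1/3rd the power works out to roughly a 300x improvement in performance per watt.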