The Fastest AI. Easy to Use.
We’ve built the fastest AI accelerator, based on the largest processor in the industry, and made it easy to use. With Cerebras, blazing-fast training, ultra-low-latency inference, and record-breaking time-to-solution enable you to achieve your most ambitious AI goals.
Go Ahead – Reduce the Cost of Curiosity.
The CS-2, The Fastest AI Accelerator in the World
Purpose-built for AI, the CS-2 replaces an entire cluster of graphics processing units (GPUs). Gone are the challenges of parallel programming, distributed training, and cluster management. From chip to system to software, every aspect of the CS-2 is optimized to accelerate and simplify AI work. The CS-2 produces answers in less time.
Wafer Scale Engine: The Largest Chip Ever Built
The Wafer Scale Engine (WSE-2) is the largest chip ever built and powers the CS-2. The WSE-2 is 56 times larger than the largest GPU, with 123 times more compute cores and 1,000 times more high-performance on-chip memory. The only wafer-scale processor ever produced, it contains 2.6 trillion transistors, 850,000 AI-optimized cores, and 40 gigabytes of high-performance on-wafer memory, all aimed at accelerating your AI work.
Customer Success with Cerebras
Argonne National Lab is speeding up cancer research.
“We have a cancer-drug response prediction model that’s running many hundreds of times faster on [Cerebras] than it runs on a conventional GPU. We are doing in a few months what would normally take a drug development process years to do.”
– Rick Stevens, Associate Director, Argonne National Lab
GlaxoSmithKline is exploring more ideas in less time.
“The Cerebras system will be critical in the development of next-generation ML to uncover the next set of more viable drug targets. The incredible power of the Cerebras architecture allows us to explore these new frontiers and decode the language of the cell.”
– Kim Branson, Head of AI R&D, GlaxoSmithKline
AstraZeneca is training AI in 2 days instead of weeks.
“Cerebras opens the possibility to accelerate our AI efforts, ultimately helping us understand where to make strategic investments in AI. Training which historically took over 2 weeks to run on a large cluster of GPUs was accomplished in just over 2 days – 52 hours to be exact.”
– Nick Brown, Head of AI Engineering, AstraZeneca