Cerebras is a computer systems company dedicated to accelerating deep learning.
The pioneering Wafer-Scale Engine (WSE) – the largest chip ever built – is at the heart of our deep learning system, the Cerebras CS-1.
56x larger than any other chip, the WSE delivers more compute, more memory, and more communication bandwidth. This enables AI research at previously impossible speeds and scale.
56x the size of the largest Graphics Processing Unit
The Cerebras Wafer Scale Engine is 46,225 mm² with 1.2 trillion transistors and 400,000 AI-optimized cores.
By comparison, the largest Graphics Processing Unit is 815 mm² and has 21.1 billion transistors.
The Cerebras software stack is designed to meet users where they are, integrating with open-source ML frameworks such as TensorFlow and PyTorch. Our software makes cluster-scale compute resources available to users with today's tools; a brief PyTorch sketch of what that looks like appears after the list below.
Provides faster time to solution, with cluster-scale resources on a single chip and full utilization at any batch size, including batch size 1
Runs at full utilization with tensors of any shape (fat, square, or thin; dense or sparse), enabling researchers to explore novel network architectures and optimization techniques
Provides flexibility for parallel execution, supporting model parallelism via layer-pipeline out of the box
Translates sparsity in models and data into performance via a vast array of programmable cores and a flexible interconnect
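As a rough illustration of "today's tools", here is a minimal sketch in plain PyTorch: an ordinary model definition and a single training step, with nothing hardware-specific in the code. The model name, layer sizes, and data are illustrative only; how the Cerebras stack maps such standard framework code onto the WSE is not shown here.

```python
# A minimal sketch in plain PyTorch. Model name and sizes are illustrative,
# not a Cerebras example; the point is that the network is expressed
# entirely with standard framework APIs.
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """An ordinary PyTorch model -- nothing hardware-specific in its definition."""
    def __init__(self, in_features=784, hidden=256, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = SmallClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# One training step on random data. Batch size 1 is used here simply to show
# that, from the framework's point of view, it is written like any other size.
x = torch.randn(1, 784)
y = torch.randint(0, 10, (1,))

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```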