Deep Learning Programming at Scale
Deep learning has become one of the most important computational workloads of our generation, advancing applications across industries from healthcare to autonomous driving. But it is also profoundly compute-intensive.
June 29, 2021
Limits to Scale-Out for Training Language Models
Natural language processing has revolutionized how data is consumed, and computational demand has skyrocketed with it. Companies in every industry are turning to GPU clusters to keep up. But is that really the best solution?
June 24, 2021
Argonne National Laboratory
At Argonne National Laboratory, researchers work to gain a deeper understanding of our planet, our climate, and the cosmos. But they were running into major challenges scaling large AI models across clusters of GPUs. Download the case study to find out how the laboratory overcame these challenges with Cerebras Systems.
June 8, 2021
Train Large BERT Models Faster with Cerebras Systems
Despite overwhelming evidence that training large BERT-style models on massive, domain-specific datasets yields higher accuracy, few organizations do it.
May 24, 2021
Cerebras Systems: Achieving Industry Best AI Performance Through A Systems Approach
An introduction to Cerebras as a company, including a discussion of the core innovations behind the Cerebras CS-2. What is it? How does it work? What does it enable for machine learning practitioners?
April 6, 2021