Chip, Machine Learning, System, Cloud

Announcing Cerebras Cloud @ Cirrascale, Democratizing High-Performance AI Compute

Gil Haberman, Sr. Director of Product Marketing | September 16, 2021

Democratizing High-Performance AI Compute  

Today, we are thrilled to announce the availability of Cerebras Cloud @ Cirrascale, delivering the world’s fastest AI accelerator as a cloud service! Nearly every day, we engage with machine learning (ML) scientists and engineers who are looking to push the frontiers of deep learning but find themselves constrained by the long training times of existing offerings. In contrast, our solution has been built from the ground up for AI. It delivers hundreds or thousands of times more performance than alternatives – enabling data scientists and ML practitioners to train and iterate on large, state-of-the-art models in minutes or hours rather than days or weeks.

Many of our early commercial and government customers chose to deploy Cerebras systems directly into their on-premises data centers to accelerate cutting-edge R&D in areas such as drug discovery and natural language processing (NLP). Our new Cerebras Cloud offering with Cirrascale dramatically expands our reach – from innovative startups to the Fortune 500 – bringing the unparalleled AI performance of the Cerebras CS-2 system to more users. This is an important step in truly democratizing high-performance AI compute!

Dream Big, with the Most Powerful AI at Your Fingertips

In building the Cerebras CS-2, every design choice has been made to accelerate deep learning, reducing training times and inference latencies by orders of magnitude. The CS-2 features 850,000 AI-optimized compute cores, 40 GB of on-chip SRAM, 20 PB/s of memory bandwidth, and 220 Pb/s of interconnect bandwidth, fed by 1.2 Tb/s of I/O across twelve 100 Gb Ethernet links.
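To put those figures in perspective, a quick back-of-envelope calculation (using only the numbers quoted above; the even per-core split is our simplifying assumption) shows what each of the 850,000 cores has locally at its disposal:

```python
# Per-core resource estimates for the CS-2, derived from the headline
# specs above: 850,000 cores, 40 GB on-chip SRAM, 20 PB/s memory bandwidth.
# Assumes resources are divided evenly across cores, which is a
# simplification for illustration only.
CORES = 850_000
SRAM_BYTES = 40e9            # 40 GB on-chip SRAM
MEM_BW_BYTES_PER_S = 20e15   # 20 PB/s aggregate memory bandwidth

sram_per_core_kb = SRAM_BYTES / CORES / 1e3
bw_per_core_gbs = MEM_BW_BYTES_PER_S / CORES / 1e9

print(f"SRAM per core: ~{sram_per_core_kb:.0f} kB")          # ~47 kB
print(f"Memory bandwidth per core: ~{bw_per_core_gbs:.1f} GB/s")  # ~23.5 GB/s
```

Keeping tens of kilobytes of SRAM and tens of gigabytes per second of bandwidth next to every core is what lets model state stay on-chip instead of shuttling to off-chip DRAM.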

Now, with Cerebras Cloud @ Cirrascale, this system is available right at your fingertips. Cerebras Cloud is offered in weekly or monthly flat-rate allotments, with discounts for longer-term, predictable usage as you grow. In our experience, as users observe the blazing-fast performance of the CS-2, ideas for new models and experiments emerge – such as training from scratch on domain-specific datasets, using more efficient sparse models, or experimenting with smaller batch sizes – resulting in better-performing models in production and an accelerated pace of innovation.

Integrate with Your Environment, Reduce Operational Burden

Getting started with Cerebras Cloud is easy. Our Software Platform integrates with popular machine learning frameworks like TensorFlow and PyTorch, so you can use familiar tools to get started running models on the CS-2 right away. The Cerebras Graph Compiler automatically translates your neural network from your framework representation into a CS-2 executable, optimizing compute, memory, and communication to maximize utilization and performance.

This approach also dramatically simplifies daily operations. The CS-2 systems that power Cerebras Cloud deliver cluster-scale performance with the programming simplicity of a single node. Whether the model is large or small, our compiler optimizes execution to get the most out of the system. As a result, cluster orchestration, synchronization, and model tuning are eliminated, letting you focus on innovation rather than cluster management overhead.

And for those of you with data stored in other cloud services, our friends at Cirrascale can easily integrate the Cerebras Cloud with your current cloud-based workflow to create a secure, multi-cloud solution. They will handle the setup and management, so you can focus on deep learning.

Want to learn more? Get started with Cerebras Cloud @ Cirrascale now!