Use Case

TotalEnergies

TotalEnergies, one of the largest energy companies in the world, needs order-of-magnitude speedups across a wide range of simulations: batteries, biofuels, wind flows, drilling, and CO2 storage.

Our CS-2 system outperformed a modern AI GPU by more than 200X on a finite-difference seismic modeling benchmark using code written in the Cerebras Software Language (CSL).
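To give a concrete sense of the computation being benchmarked, the sketch below steps a 2D acoustic wave equation forward in time with a second-order finite-difference stencil. It is written in Python/NumPy purely for illustration: the actual benchmark kernels are written in CSL and run on the wafer-scale engine, and the grid size, velocity model, and time step here are placeholder assumptions.

```python
# Minimal 2D acoustic wave-equation finite-difference sketch (illustration only).
# The real benchmark uses CSL kernels on the CS-2; all parameters below are
# placeholder assumptions, not benchmark settings.
import numpy as np

nx, nz = 200, 200               # grid points
dx = dz = 10.0                  # grid spacing (m)
dt = 0.001                      # time step (s), satisfies the CFL condition here
nt = 500                        # number of time steps
c = np.full((nz, nx), 2000.0)   # constant velocity model (m/s)

p_prev = np.zeros((nz, nx))     # pressure field at t - dt
p_curr = np.zeros((nz, nx))     # pressure field at t
p_curr[nz // 2, nx // 2] = 1.0  # impulsive source at the grid center

for _ in range(nt):
    # Second-order central differences for the Laplacian (interior points only)
    lap = (
        (p_curr[1:-1, 2:] - 2 * p_curr[1:-1, 1:-1] + p_curr[1:-1, :-2]) / dx**2
        + (p_curr[2:, 1:-1] - 2 * p_curr[1:-1, 1:-1] + p_curr[:-2, 1:-1]) / dz**2
    )
    # Leapfrog update in time: p(t + dt) = 2 p(t) - p(t - dt) + (c dt)^2 * Laplacian
    p_next = np.zeros_like(p_curr)
    p_next[1:-1, 1:-1] = (
        2 * p_curr[1:-1, 1:-1]
        - p_prev[1:-1, 1:-1]
        + (c[1:-1, 1:-1] * dt) ** 2 * lap
    )
    p_prev, p_curr = p_curr, p_next
```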


Testimonial

"We count on the CS-2 system to boost our multi-energy research and give our research ‘athletes’ that extra competitive advantage."

Vincent Saubestre

CEO & President @ TotalEnergies Research & Technology USA

Use Case

Exploration geophysics

Geoscientists employ massive multi-modal datasets and computationally intensive signal processing algorithms to map out subsurface environments.

With its 850,000-core wafer-scale processor built for sparse linear algebra, the CS-2 can process vastly larger datasets in a fraction of the time required by traditional CPU and GPU clusters, enabling broader searches and greater precision.

Use Case

Reservoir modeling

AI and HPC methods are increasingly used together to build better models of subsurface physics and fluid flow. The wafer-scale engine's unique combination of massive sparse compute, high-bandwidth memory, and communication makes it the ideal processor for both types of work.

The CS-2 accelerates AI and HPC to improve reservoir modeling accuracy and enable more efficient operations with a reduced environmental footprint.

Use Case

Sensor data processing and predictive maintenance

State-of-the-art energy facilities such as refineries, pipelines, power plants, and distribution grids are packed with sensors gathering data to ensure safe and efficient operations. AI methods for time series modeling hold great potential to convert this data into insights, but computing at scale remains a challenge.

The CS-2 solves this by delivering cluster-scale compute with the programming ease of a single device, enabling researchers to experiment quickly with complex AI models and deploy the right solution in days or weeks instead of months or years.
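As a rough illustration of the kind of time-series modeling involved, the sketch below fits a simple autoregressive model to a synthetic sensor trace and flags readings that deviate from its predictions. The data, model choice, window length, and threshold are assumptions made purely for illustration and are not drawn from any Cerebras or TotalEnergies workflow.

```python
# Minimal sketch of time-series anomaly flagging for sensor data (illustration only).
# The synthetic signal, autoregressive model, and threshold are assumptions,
# not part of any production predictive-maintenance pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sensor trace: slow drift plus noise, with an injected fault
t = np.arange(2000)
signal = 50 + 0.01 * t + rng.normal(0, 0.5, t.size)
signal[1500:1520] += 8.0        # simulated sensor fault

window = 32                     # autoregressive lag window (assumed)

# Lagged design matrix: predict each sample from the previous `window` samples
X = np.stack([signal[i - window:i] for i in range(window, signal.size)])
y = signal[window:]

# Least-squares autoregressive fit and residuals
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

# Flag samples whose residual exceeds 4 standard deviations
threshold = 4 * resid.std()
anomalies = np.flatnonzero(np.abs(resid) > threshold) + window
print(f"flagged {anomalies.size} anomalous samples, e.g. {anomalies[:5]}")
```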