Customer Use Case

Case Study: Accelerating NLP Model Training and Enabling Higher Accuracy for Financial Services Applications

A project conducted by a leading financial services institution and Cerebras Systems demonstrated for the first time that training NLP models from scratch on domain-specific financial services datasets is practical in an enterprise environment.

The CS-2 system delivered the compute performance of more than 120 AI-optimized GPUs. It reduced training time 15X compared to a leading 8-GPU server, delivered dramatic improvements in model prediction confidence, and nearly halved energy consumption. These results show a promising path to accelerated research and AI-powered capability development for financial services enterprises.

Use Case

Text and document analysis

State-of-the-art natural language models have the potential to put deep insights from massive news, literature, or records databases at analysts' fingertips in an instant. However, these models often take days, weeks, or even months to train on legacy GPU clusters that require complex parallel programming. With the CS-3 system, enterprise researchers and analysts can develop models and get answers back orders of magnitude faster to deliver AI insights ahead of the market.
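
To make the workload concrete, here is a minimal sketch of a domain-specific document classifier trained with a standard PyTorch loop. The vocabulary size, label set, and toy batch are illustrative assumptions, not details from the case study; a production model would be a full transformer, and on Cerebras systems the same PyTorch code would be compiled through the Cerebras software stack rather than hand-parallelized across GPUs.

```python
# Minimal sketch of a domain-specific text classifier in PyTorch.
# Vocabulary size, label count, and the toy batch below are
# illustrative placeholders, not details from the case study.
import torch
import torch.nn as nn

VOCAB_SIZE = 30_000   # assumed tokenizer vocabulary
NUM_LABELS = 4        # assumed document categories
EMBED_DIM = 128

class DocClassifier(nn.Module):
    """Mean-pooled embedding classifier: a tiny stand-in for a transformer."""
    def __init__(self):
        super().__init__()
        self.embed = nn.EmbeddingBag(VOCAB_SIZE, EMBED_DIM)  # mean-pools token embeddings
        self.head = nn.Linear(EMBED_DIM, NUM_LABELS)

    def forward(self, token_ids, offsets):
        return self.head(self.embed(token_ids, offsets))

model = DocClassifier()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy batch: two "documents" flattened into one id tensor plus offsets,
# the layout nn.EmbeddingBag expects.
token_ids = torch.randint(0, VOCAB_SIZE, (12,))
offsets = torch.tensor([0, 7])   # doc 0 = ids[0:7], doc 1 = ids[7:12]
labels = torch.tensor([1, 3])

logits = model(token_ids, offsets)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"toy training step loss: {loss.item():.4f}")
```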

Use Case

Fraud detection

Fraud is costly and increasingly difficult to detect, making fraud detection and management one of the most attractive AI applications for financial services. Fortunately, massive volumes of transaction data and AI models can be combined to direct customers' and analysts' attention to suspicious activity before damage is done. Cerebras' revolutionary WSE deep learning processor makes training and running state-of-the-art language, time series, and graph models in production orders of magnitude faster than legacy compute systems, with the programming ease of a single desktop machine.
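
As a rough sketch of the unsupervised side of this workload, anomalous transactions can be flagged with scikit-learn's IsolationForest. The feature names and synthetic data below are invented for illustration; production fraud systems would train far larger supervised or graph models over real transaction streams, which is where the WSE's throughput matters.

```python
# Minimal sketch: flagging anomalous transactions with an isolation forest.
# Feature names and the synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic transactions: [amount, seconds_since_last_txn, merchant_risk_score]
normal = rng.normal(loc=[50.0, 3600.0, 0.1],
                    scale=[20.0, 600.0, 0.05],
                    size=(1000, 3))
suspicious = np.array([[9500.0, 5.0, 0.90],   # large, rapid-fire, risky merchant
                       [8800.0, 3.0, 0.85]])
transactions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

# predict() returns -1 for outliers and 1 for inliers.
flags = model.predict(suspicious)
print("suspicious rows flagged as outliers:", int((flags == -1).sum()), "of", len(flags))
```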

Use Case

Algorithmic trading and portfolio management

Data and AI are changing the way we engage with the market and manage our portfolios, but handling large datasets at low latency is a must. The WSE's massive compute, combined with memory and communication bandwidth impossible for conventional supercomputers, enables orders of magnitude higher data throughput and lower computation time than legacy distributed processor architectures for this work.
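
As a toy illustration of the kind of signal computation this paragraph alludes to, the sketch below computes a moving-average crossover signal over a synthetic price series. The window lengths and random-walk prices are assumptions for demonstration; a production system would compute far richer features over live market data, which is where memory and communication bandwidth dominate.

```python
# Toy example: moving-average crossover signal over a synthetic price series.
# Window lengths and the random-walk prices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
prices = 100.0 + np.cumsum(rng.normal(0.0, 1.0, size=500))  # random-walk prices

def moving_average(x: np.ndarray, window: int) -> np.ndarray:
    """Trailing moving average; the first `window - 1` entries are NaN."""
    out = np.full_like(x, np.nan)
    cumsum = np.cumsum(x)
    out[window - 1:] = (cumsum[window - 1:] -
                        np.concatenate(([0.0], cumsum[:-window]))) / window
    return out

fast = moving_average(prices, 10)
slow = moving_average(prices, 50)

# +1 = long while the fast average is above the slow one, -1 = short,
# 0 during the warm-up period before both averages are defined.
signal = np.where(np.isnan(slow), 0, np.sign(fast - slow)).astype(int)
print("last 5 signals:", signal[-5:])
```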
