Based in Sunnyvale, California, Cerebras develops specialized AI chips focused on high-speed inference, aiming to significantly reduce model response times. What sets the company apart is its use of an entire silicon wafer as a single chip, a design it calls the Wafer Scale Engine (WSE). Its current flagship product is the WSE-3.

The fundraising was likely boosted by Cerebras' recently announced deal with OpenAI, worth more than $10 billion. Under the agreement, the AI lab plans to acquire 750 megawatts of compute capacity over three years to support ChatGPT. OpenAI is widely expected to use the infrastructure to accelerate response times for its reasoning and coding models, which deliver leading performance but are relatively slow. OpenAI CEO Sam Altman recently promised "dramatically faster" response times, particularly in the context of the Codex coding model.