Hello Nvidia. Here comes Cerebras, the new darling of AI compute, with a $10 billion OpenAI contract and a new $1 billion in ...
GPT-5.3-Codex-Spark may be a mouthful, but it's certainly fast at 1,000 tok/s running on Nvidia rival Cerebras's CS-3 accelerators. Nvidia and AMD can take a seat. On Thursday, OpenAI unveiled ...
OpenAI launches GPT‑5.3‑Codex‑Spark, a Cerebras-powered, ultra-low-latency coding model that claims 15x faster generation speeds, signaling a major inference shift beyond Nvidia as the company faces ...
OpenAI has spent the past year systematically reducing its dependence on Nvidia. The company signed a massive multi-year deal ...
OpenAI on Thursday released GPT-5.3-Codex-Spark, its first AI model served on chips from Cerebras Systems, marking the ChatGPT maker’s first production deployment on non-Nvidia silicon.
SUNNYVALE, Calif. & VANCOUVER, British Columbia--(BUSINESS WIRE)--Today at NeurIPS 2024, Cerebras Systems, the pioneer in accelerating generative AI, announced a groundbreaking achievement in ...
Third Generation 5nm Wafer-Scale Engine (WSE-3) Powers Industry’s Most Scalable AI Supercomputers, Up To 256 exaFLOPs via 2048 Nodes SUNNYVALE, Calif.--(BUSINESS WIRE)--Cerebras Systems, the pioneer ...
Cerebras Systems Inc., a startup providing ultra-fast artificial intelligence inference, today announced support for OpenAI's newly released 120-billion-parameter open-weight reasoning model, ...
Sunnyvale, CA — Meta has teamed with Cerebras on AI inference in Meta’s new Llama API, combining Meta’s open-source Llama models with inference technology from Cerebras. Developers building on the ...
Fast Deep Coder pairs a Cerebras-accelerated reasoning model with NinjaTech’s SuperNinja VM environment that enables developers to build, run, fix, and ship code at speeds previously unattainable. By ...