Colossus Goes Live

Elon Musk's xAI has activated its Colossus supercomputer, a cluster of 100,000 NVIDIA H100 GPUs that is by a significant margin the largest AI training system ever built. Located in Memphis, Tennessee, the facility draws 150 megawatts of power and required the construction of a dedicated natural gas peaker plant to guarantee uninterrupted supply. The scale is genuinely staggering.

For reference: OpenAI's GPT-4 was reportedly trained on approximately 25,000 A100 GPUs. Colossus has four times as many chips, and each H100 delivers roughly three times the peak training throughput of an A100, so the gap in raw compute is closer to an order of magnitude, suggesting that whatever xAI trains on it will represent a meaningful capability step beyond current frontier models.
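To put that multiple in perspective, here is a back-of-envelope sketch. It assumes NVIDIA's published peak dense BF16 Tensor Core figures (roughly 312 TFLOPS for the A100 and 989 TFLOPS for the H100); sustained training throughput is lower for both chips, so treat the output as a rough ceiling rather than a measurement.

    # Back-of-envelope compute comparison. The TFLOPS values are NVIDIA's
    # published peak dense BF16 figures, not sustained training throughput.
    A100_TFLOPS = 312
    H100_TFLOPS = 989

    gpt4_gpus = 25_000       # reported A100 count for GPT-4
    colossus_gpus = 100_000  # H100 count for Colossus

    count_ratio = colossus_gpus / gpt4_gpus
    compute_ratio = (colossus_gpus * H100_TFLOPS) / (gpt4_gpus * A100_TFLOPS)

    print(f"GPU count ratio:    {count_ratio:.1f}x")    # 4.0x
    print(f"Peak compute ratio: {compute_ratio:.1f}x")  # ~12.7x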

"Colossus is designed for one purpose: to build artificial general intelligence faster than anyone else on the planet." — Elon Musk, xAI Founder

Grok 3 and Beyond

The first model to benefit is Grok 3, xAI's third-generation model, expected to launch in Q2 2026. Internal benchmarks leaked to Ars Technica suggest Grok 3 scores competitively with GPT-5 on reasoning tasks, with particularly strong performance in mathematics (94.2% on the MATH benchmark) and advanced coding. If the benchmarks are accurate, Grok 3 would mark a significant improvement in xAI's competitive position.

The Energy Question

At 150 megawatts, Colossus consumes as much electricity as a small city. xAI's decision to rely partially on natural gas has drawn sharp criticism from climate advocates. Musk has pointed to future nuclear power agreements as a path to net-zero operations, but the timeline for those agreements remains unclear. The energy footprint of frontier AI training is becoming one of the most consequential environmental questions of the decade.
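To make the city comparison concrete, a quick sketch converts 150 megawatts of continuous draw into household terms. It assumes the facility runs at full load year-round and uses a commonly cited U.S. average of about 10,500 kWh of residential electricity consumption per year; both numbers are approximations.

    # Annual energy of a 150 MW facility in U.S. household equivalents.
    # Assumes constant full-load draw and ~10,500 kWh/yr per household,
    # a commonly cited average; both figures are approximations.
    FACILITY_MW = 150
    HOURS_PER_YEAR = 24 * 365  # 8,760

    annual_mwh = FACILITY_MW * HOURS_PER_YEAR   # 1,314,000 MWh
    households = annual_mwh * 1_000 / 10_500    # kWh / (kWh per home)

    print(f"Annual energy: {annual_mwh:,} MWh")
    print(f"Equivalent households: {households:,.0f}")  # ~125,000

On that rough arithmetic, "a small city" is a fair characterization: the facility's draw is on the order of 125,000 U.S. homes.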