Nvidia’s Jensen Huang Unveils ‘Vera Rubin’ Platform and Blackwell Ultra, Declares AI’s True Compute Demand

At Nvidia’s GTC developer event in San Jose, CEO Jensen Huang made a bold statement: “Almost the entire world got it wrong” about AI’s computational needs. As Nvidia cements its dominance in the AI industry, the company unveiled its latest innovations—signaling an unprecedented demand for high-performance inference hardware.

Introducing ‘Vera Rubin’: Nvidia’s First Custom CPU & New GPUs

One of the most significant announcements at GTC was the debut of the Vera Rubin platform, featuring Nvidia’s first custom-designed CPU, ‘Vera’, alongside two new ‘Rubin’ GPUs. Named after astronomer Vera Rubin, whose observations provided key evidence for the existence of dark matter, the platform is built to redefine AI inference capabilities.

According to Huang, Vera is twice as fast as the Arm-based Grace CPU used in last year’s Blackwell systems. Combined with Rubin GPUs, the platform delivers more than double Blackwell’s inference performance, addressing the soaring computational demands of next-generation AI applications.

Blackwell Ultra: More Power, More Memory, More AI Dominance

Nvidia also announced Blackwell Ultra, an enhanced version of its flagship Blackwell GPU, set for launch in the second half of 2025. This iteration brings significant boosts in compute power and memory, further reinforcing Nvidia’s role as the backbone of AI model training and inference.

AI’s Compute Demand Is ‘100x More Than We Thought’

Huang took aim at the narrative surrounding DeepSeek, the Chinese AI startup that suggested powerful AI models could be trained with fewer, lower-tier GPUs. He dismissed this notion outright, stating that real-time AI reasoning and agentic AI require 100 times more computation than previously estimated.

This claim underscores why Nvidia continues to push the boundaries of hardware design—focusing on accelerating inference, the process by which trained AI models generate responses in real time.

Project Digits: Bringing AI Power to the Desktop

Expanding its AI ecosystem, Nvidia introduced Project Digits, a new line of AI desktop machines for developers, researchers, and students. Available in two versions—DGX Spark (compact) and DGX Station (high-performance)—these devices aim to bring serious AI capabilities to individual users. Both are expected to launch this summer.

Nemotron: Nvidia’s Open-Source AI Model Family

Nvidia is not stopping at hardware. The company is now pushing into AI applications with Nemotron, a new family of open-source, mid-size reasoning models based on Meta’s Llama. Designed for lightweight enterprise tasks, Nemotron runs efficiently on Nvidia hardware, reinforcing the company’s strategy to keep users within its ecosystem.

The Future of AI Belongs to Nvidia

From cutting-edge inference chips to AI desktops and open-source models, Nvidia’s latest announcements reaffirm its position at the heart of the AI revolution. As inference workloads skyrocket, Nvidia’s Vera Rubin, Blackwell Ultra, and Nemotron models ensure that the company remains indispensable to the future of artificial intelligence.
