Jensen Huang keynote at CES 2025 [video]

youtube.com

152 points by subharmonicon 6 days ago


ksec - 2 days ago

I actually got a lot more upbeat about the potential of AI after watching this keynote more so than any other demonstration or presentation.

Blackwell is still based on N4, a derivative of 5nm. We know we will have N3 GPUs next year, and they should be working with TSMC on capacity planning for N2. Currently Blackwell is pretty much limited by the capacity of TSMC, HBM, or packaging. And unless the AI hype dies off soon (which I doubt will happen), we should still have at least another 5 years of these GPU improvements, with another 5-10x performance increase.

And they have foreshadowed their PC entry with the MediaTek partnership (I wonder why they don't just acquire MediaTek), and maybe even a smartphone or tablet with GeForce GPU IP.

The future is exciting.

sliken - a day ago

The Nvidia Project Digits looked rather interesting. Much like an Apple Studio Ultra: 2 dies, "high"-bandwidth unified memory, 20 cores, a tiny Mac-mini-sized case, and 2 SFP+ cages for 2 ConnectX ports (unsure if that's IB or Ethernet).

They claim it will be quite good at AI (1 petaflop of FP4), but sadly don't mention the memory bandwidth. It's somewhere in the range of awesome to terrible depending on the bandwidth.
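A rough way to see why the bandwidth figure is make-or-break: at batch size 1, LLM decoding is memory-bandwidth-bound, so tokens/sec is roughly bandwidth divided by model size. A minimal back-of-envelope sketch, with purely hypothetical bandwidth numbers since NVIDIA didn't state one:

    # Back-of-envelope: bandwidth-bound decode speed at batch size 1.
    # Each generated token streams roughly the full weights once, so
    # tokens/sec ~= memory bandwidth / model size in bytes.
    def tokens_per_sec(bandwidth_gb_s, params_b, bytes_per_param):
        model_bytes = params_b * 1e9 * bytes_per_param
        return bandwidth_gb_s * 1e9 / model_bytes

    # Example workload: a 70B model at ~4-bit quantization (0.5 bytes/param),
    # against assumed (not announced) LPDDR5X-class bandwidths in GB/s.
    for bw in (273, 546, 819):
        print(f"{bw} GB/s -> ~{tokens_per_sec(bw, 70, 0.5):.0f} tok/s")

Around 270 GB/s that works out to a usable-but-slow ~8 tok/s on a 70B model; closer to 800 GB/s it starts looking like the "awesome" end of the range.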

dang - a day ago

Related ongoing thread:

Ask HN: Pull the curtain back on Nvidia's CES keynote please - https://news.ycombinator.com/item?id=42670808 - Jan 2025 (2 comments)

magicalhippo - 6 days ago

AMD barely mentioned their next-gen GPUs, while NVIDIA came out swinging right from the start. AMD announced two new models which, by their own cryptic slide, wouldn't even compete with their current top end. Then NVIDIA came out and announced a 4090-performance GPU for $549...

If that's not just hot air from NVIDIA, I totally get the business decision from AMD, but man, I would love some more competition at the high end.

contingencies - a day ago

As someone who has gone pretty deep into robotics over the last 9 years, I skipped right to the physical AI portion, and wasn't impressed.

This has been stated on HN in most robotics threads, but the core of what they show, once again, is content generation, a feature largely looking for an application. The main application discussed is training data synthesis. While there is value in this for very specific use cases, it's still lipstick ("look, it works! wow, AI!") on a pig (i.e. a non-deterministic system being placed in a critical operations process). This embodies one of the most fallacious, generally unspoken assumptions in AI and robotics today: that it is desirable to deal with the real world in an unstructured manner using fuzzy, vendor-linked, unauditable, shifting-sand AI building blocks. That assumption can make sense for driving and other relatively uncontrolled environments with immovable infrastructure and vast cultural, capital and paradigm investments, where complex multi-sensor synthesis and rapid decision making from environmental context and prior training are demanded, but it makes very little sense for industrial, construction, agricultural, rural, etc. Industrial work is traditionally about understanding the problem, breaking it into unit operations, then designing, fabricating and controlling the environment to optimize the process for each of those in sequence, thus lowering cost and increasing throughput.

NVidia further wants us to believe we should buy three products from them: an embedded system ("nano"), a general-purpose robotic system ("super"), and something more computationally expensive for simulation-type applications ("ultra"). They claim (with apparently no need to proffer evidence whatsoever) that "all robotics" companies need these "three computers". I've got news for you: we don't. This is a fantasy, and limited if any value-add will result from what amounts to yet another amorphous simulation, integration and modeling platform based on broken vendor assumptions. Ask anyone experienced in industrial; they'll agree. The industrial vendor space is somewhat broken and rife with all sorts of dodgy things that wouldn't fly in other sectors, but NVidia simply ain't gonna fix it with their current take, which for me lands somewhere between wishful thinking and downright duplicity.

As for "digital twins", most industrial systems are much like software systems: emergent, cobbled together from multiple poor and broken individual implementations, sharing state across disparate models, each based on poorly or undocumented design assumptions. This means their view of self-state, or "digital twin", is usually functionally fallacious. Where "digital twins" can truly add value is in areas like functional safety, where if you design things correctly you avoid being mired in potentially lethally disastrous emergent states from interdependent subsystems that were not considered at subsystem design, maintenance or upgrade time because a non-exhaustive, insufficiently formal and deterministic approach was used in system design and specification. This very real value however hinges on the value being delivered at design time, before implementation, which means you're not going to be buying 10,000 NVidia chips, but most likely zero.

So my 2c is that the Physical AI portion is basically a poorly founded, forward-looking application sketch from what amounts to a professional salesman in a shiny black crocodile jacket at a purchased high-viz keynote. Perhaps the other segments had more weight.

behnamoh - 6 days ago

I liked this part:

    "one small step at a time, and one giant leap, together."
I didn't like this part:

    5090 for $2000, about $500 more than 4090 when it was announced.
They didn't mention the VRAM amount though, and I doubt it's more than 24GB. If the Apple M4 Ultra gets close to the 5090's 1.8 TB/s of bandwidth, it'll crush GeForce once and for all (and for good).
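For a sense of scale on why the VRAM figure matters so much for running models locally, here is some illustrative arithmetic only (the model sizes and quantization levels are assumptions, not keynote figures):

    # Weights-only memory for an N-billion-parameter model at a given quantization.
    # KV cache and activations come on top of this, so these are lower bounds.
    def weight_gb(params_b, bits_per_param):
        return params_b * 1e9 * bits_per_param / 8 / 1e9

    for params_b in (8, 32, 70):
        for bits in (16, 8, 4):
            gb = weight_gb(params_b, bits)
            fits = "fits" if gb <= 24 else "does not fit"
            print(f"{params_b}B @ {bits}-bit: {gb:.0f} GB -> {fits} in 24 GB")

Even at 4-bit, a 70B model's weights alone are ~35 GB before the KV cache, which is why a 24GB card caps out around the 30B class.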

Also, a nitpick: the opening video said tokens are responsible for all AI, but that only applies to a subset of AI models...
