Kioxia and Dell cram 10 PB into slim 2RU server
blocksandfiles.com
45 points by rbanffy 3 hours ago
At current enterprise NVMe prices, the drives alone for this must easily push past the $500k to $1M mark. It's fascinating to see this level of density, but it’s strictly going to be hyperscaler or high-end defense/research budget territory for a long time.
The very first sentence of this article mixes up terabytes and petabytes. I used to dismiss an entire article as poor quality on seeing a mistake like this. But these days it also feels like an indicator that the article was written by a human and might actually have something interesting to say.
Sadly not in this case though - the Kioxia drives are interesting, but the fact that Dell has put some in a box is much less so.
This is one of the cases limited by PCIe speed: the lanes are shared with the SSDs, so the network can only do 5x 400 Gbps. This is on PCIe 5.0; luckily we have the 7.0 spec ready and 8.0 is already at 0.5 draft status.
If we could somehow increase the density further by 5x, we would be able to store 1EB in a single rack.
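A quick back-of-envelope sketch of both claims (the ~64 GB/s per PCIe 5.0 x16 slot and the 20 servers per 42U rack are assumptions, not figures from the article):

    # Rough numbers for the bandwidth and density claims above.
    nic_gbps = 5 * 400                    # 2,000 Gb/s of network
    nic_gBps = nic_gbps / 8               # = 250 GB/s
    pcie5_x16_gBps = 64                   # roughly one PCIe 5.0 x16 slot (assumption)
    print(nic_gBps / pcie5_x16_gBps)      # ~4 x16 slots eaten by the NICs alone

    per_2u_pb = 10                        # capacity per 2U server (headline figure)
    servers_per_rack = 20                 # assumed 2U boxes in a 42U rack
    print(per_2u_pb * servers_per_rack)              # 200 PB per rack today
    print(per_2u_pb * 5 * servers_per_rack / 1000)   # 1.0 EB per rack at 5x density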
The most interesting part to me is the last sentence.
>Scality tells us it’s working on supporting a future nearline-class SSD from Samsung, viewed as an HDD killer, with similar or even larger capacity and a roadmap out to a 1 PB drive.
Finally an HDD killer. Maybe in another 5-10 years' time. The day when everyone has an SSD NAS / AI cloud at home will come.
There's been a lot of talk about orbital DCs lately, but with these levels of density, orbital CDNs might be a more obvious use case. It would be interesting to see if something like Starlink could use drives like this to cache media content and reduce the overall data moving through the constellation. It could even be worth it to have some satellites in higher orbits (even GEO if the ground hw can reach it) dedicated to streaming media content. You can tolerate higher RTT for content that doesn't need to be real time.
Or you could use fibre, which has the advantage of not needing >1 kW of concentrated microwave to get ~2 Gbps of throughput.
Or even better, not yeeting it into an environment where it's cooked and cooled every 90 minutes.
Or even better, where it's not absolutely pelted by cosmic rays enough to obliterate a good GB a day of data.
Or space data centre.
no, absolutely not. orbital datacenters are never going to happen, it doesn't matter whether you try to frame them as compute or storage or whatever else.
the extreme density of these SSDs is actually an anti-feature in the context of spacecraft hardware.
the RAD750 CPU [0] for example uses a 150nm process node. its successor the RAD5500 [1] is down to 45nm. that's an order of magnitude larger than chips currently made for terrestrial uses.
radiation-hardening involves a lot of things, but in general the more tightly packed the transistors are, the more susceptible the chip is to damage. sending these SSDs to space would be an absurd waste of money because of how quickly they would degrade.
and then there's the power consumption & heat dissipation. one of these drives draws 25W [2] and Dell is bragging about cramming 40 of them into one server. that's a full kilowatt of power - essentially a space heater in a 2U form factor.
0: https://en.wikipedia.org/wiki/RAD750
1: https://en.wikipedia.org/wiki/RAD5500
2: https://americas.kioxia.com/content/dam/kioxia/en-us/busines...
If I correctly understand what you're suggesting, then that could save on uplink bandwidth. Sending one copy into space, and then sending it back down over and over again sounds nice.
But does it solve a problem that we actually have? Is uplink bandwidth a pressing limitation?
Tell me about the thermals.
Max per drive is 25 W, so even a rack with 20 servers of 40 drives each probably draws less than the average GPU rack, even after the other overheads.
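A rough sketch of that comparison, assuming the 25 W max-per-drive figure from the spec sheet linked above, a hypothetical 20-server rack, and commonly published GPU-rack power numbers:

    # Drive power for a hypothetical rack of 20 of these 2U servers.
    drive_watts = 25          # max per drive, per the Kioxia spec sheet
    drives_per_server = 40
    servers_per_rack = 20     # assumption
    drive_kw = drive_watts * drives_per_server * servers_per_rack / 1000
    print(drive_kw)           # 20.0 kW from the drives alone
    # Dense GPU racks are commonly quoted at several tens of kW and up,
    # so even with CPUs, NICs and fans added, this stays well under that.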
Some wealthy techbro from /r/datahoarders is going to purchase this to store all episodes of Doctor Who in uncompressed 10-bit 4:2:2 FFV1 Matroska remuxes with redundant PAR2 recovery archives.
Not quite yet.
The interesting thing here is ~256TB in a single drive, but it's in E3.L form factor.
I have about 160TB on hard drives that I'm waiting to offload onto a single SSD.
But that needs to come with a connector that has adapters to USB-C, so I can attach it to my Macbook Neo.
Hopefully they get it a bit more dense soon and into the 2.5" NVMe form.
I've been waiting with bated breath for a SATA 3.5" SSD with high capacity.
I might be waiting forever, because clearly there's nothing coming. Though I'm not sure if it's because it's technically difficult (high power consumption to keep the flash lit?) or something else.
I'm aware that it leaves performance on the table for the chips, and the unit economics probably mean that for a given yield, OEMs would rather make high-performance drives which sell for more.
But a 4-bay NAS with 3.5" SSDs would be silent and theoretically sip power, and there's so much space for chips that you could space them nicely and get 10+ TiB in a drive...
I don't need to touch every cell, I just want something silent, solid-state, and less power intensive for my time-capsule backups and Linux ISOs.
Alas.
There are already a ton of adapters between the EDSFF connector used by E3 / E2 / E1 drives and everything else PCIe (PCIe slots, M.2, U.2). For example this PCIe card. (Good luck tweaking your equalizer-settings jumpers by hand though, whew!!) https://www.microsatacables.com/pcie-x8-gen4-with-redriver-t...
Drop that in one of the many USB4-to-PCIe docks and you should be good to go. Pretty fugly but it ought to just work! I think there are some cheaper models still available under $90, but here's a listing. https://www.dfrobot.com/product-2835.html
I believe a more focused, dedicated USB<->NVMe chip might also work if attached to an EDSFF connector. I didn't look hard and I haven't seen any such products yet, but it's mostly mechanical/packaging plus some signal-integrity checks, and in the end it wouldn't really be much different from any other NVMe adapter. Seems very doable.
Build it! Someone could sell (to quote The Daily Show) literally dozens of said adapter! (Eventually probably many, many more, but there's not a huge second-hand market for EDSFF atm.)
It needs to be portable/travel-friendly, something like this: https://global.icydock.com/product_327.html
With the NICs fully saturated, it takes about 666 minutes to fill this thing.
Satan’s NAS!
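For anyone wondering where the 666 comes from, a rough sketch assuming the 5x 400 Gbps of network mentioned upthread, decimal units, and zero protocol overhead:

    capacity_bytes = 10e15               # 10 PB
    net_bytes_per_s = 5 * 400e9 / 8      # 2 Tb/s -> 250 GB/s
    minutes = capacity_bytes / net_bytes_per_s / 60
    print(minutes)                       # ~666.7 minutes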
Remember that season of Silicon Valley on HBO that was all about “the box”?
I feel like we’re in that season.
Can't wait to move my spinning rust NAS to this in 20 years.
I've been wanting to update my (100TB) NAS for over five years, but I haven't yet found anything that I feel is worth upgrading to. One of these with a QSFP56 interface would be nice, but I would need to sell one of my houses to pay for it, so I'll be waiting a little longer...
Sadly none of that enterprise hardware will ever make it to you due to being wastefully shredded
I went to QLC for my NAS last cycle. The $/TB was worse, but not by a huge margin, and the performance is quite a bit better (not that it matters).
NVMe SSDs are consumable items, more so than HDDs are.
These drives will arrive on the secondary market to be snapped up by businesses lower in the food chain. By the time you can find them they will have been ridden so hard and put away wet that you probably won't want them.
What would this cost?
They are likely $200+ per TB, so one 250 TB drive would be ~$50,000.
There's probably bulk pricing, but if you bought 40 drives separately that's $2,000,000 in storage alone.
I can't remember where I saw it, but I think each of these high-capacity drives is well into the $15-25k price range.
So the drives for the full 10 PB will be $600-800k alone, plus a server with enough high-speed PCIe lanes to serve the 40+ drives, definitely $1M+.
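The two estimates above roughly reconcile; a quick sketch using both sets of assumed prices (none of these are actual quotes):

    drive_tb = 250           # ~256 TB class drive; 40 of them = the 10 PB box
    drives = 40

    # Estimate 1: ~$200+/TB
    per_drive = drive_tb * 200
    print(per_drive, per_drive * drives)      # ~$50,000/drive, ~$2M total

    # Estimate 2: $15k-25k per drive
    print(15_000 * drives, 25_000 * drives)   # $600k-$1M for the full set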
You can't buy this stuff anymore. They are leased and rented through layers of middlemen.