Testing two 18 TB white label SATA hard drives from datablocks.dev
ounapuu.ee
208 points by thomasjb 7 days ago
https://web.archive.org/web/20251006052340/https://ounapuu.e...
Reading the part about using foam to make these drives quieter, and the link to the author's other article about putting drives on foam, makes me write this obligatory warning: hard drives do not like non-rigid mounting. Yes, the servo can usually still position the heads on the right track (since it's a servo), but power dissipation will be higher, performance will be lower, and you may get more errors with a non-rigid mount. Around 20 years ago it was a short-lived fad in the silent-PC community to suspend drives on rubber bands, and many of those who did that experienced unusually short drive lifetimes and very high seek error rates. Elasticity is the worst, since it causes the actuator arm to oscillate. The ideal mount is as rigid as possible.

Meanwhile: https://www.youtube.com/watch?v=tDacjrSCeq4

I guessed what that would be before clicking. Yes, HDDs are extremely sensitive to any vibration.

As someone with a bit of experience on this topic: HDDs don't like micromovements. If you put one on a pink foam mat (whether a computer mat or a yoga mat), it wouldn't matter. If you rigid-mount it but your screws come loose, your HDD won't like it, because that would result in microvibrations from self-induced oscillations. Rubber washers are good because they eat those microvibrations. The hard foam which is talked about in the linked article is not good because it is bad from all aspects: too hard to eat up microvibrations, too soft to be a rigid mount. The worst thing you can do is rigid-mount an HDD to a case which is subject to a constant vibration load, e.g. from a heavy-duty fan or some engine.

Thanks, very interesting. TIL.

I've been mounting my 3.5" hard drives on those "fad" rubber band 5.25" drive bay adapters for decades and have not noticed any increased failure rate at all. Sure, seek time may be worse, but the reduced noise has been worth it for me.

The problem isn't just slower seeks; it's when vibration causes the head to go off-track and write data where it shouldn't, faster than the servo can correct. Track pitch in modern hard drives is only a few dozen nanometers.

I think OP is talking about something quite different.

Can you give a pic or link on what you are using?

Yes. If you need this you are far better off buying SSDs than wasting time on these silly ideas.

How much would 18TB of SSDs cost compared to 18TB of HDDs? Probably a big reason why many still go for HDDs today.

SSDs are still roughly 3x the price per TB. You can get an 8TB QVO SATA drive for like ~$300, so... ~$40/TB.

I'm pretty sure whatever that community experienced is more anecdotal than statistically provable...

Hello, author here! It's a nice surprise to notice my own post here, but the timing is unfortunate as I'm shuffling things around on my home server and will accidentally/intentionally take it offline for a bit. Here's a Wayback Machine copy of the page for when that does happen: https://web.archive.org/web/20251006052340/https://ounapuu.e...

Have you considered 2nd hand enterprise SSDs? Sometimes larger-sized models of those (15TB+) can be found with very good pricing. :)

I was about to buy a NAS. I find the idea of using an old laptop instead interesting, especially since it comes with a UPS built in. The author is using a ThinkPad T430. Any experiences?

The official TrueNAS docs recommend against using USB drives [1].
My understanding is that between the USB controller, flaky connectors and cables, and USB-to-SATA bridges of varying quality, there are just too many unknowns to guarantee a reliable experience. For example, I've heard that some USB-to-SATA controllers will drop commands and not report SMART data. That said, there are of course many people on the internet who have thrown caution to the wind and report that it's working fine for them. Personally I'm in the process of building a NAS with an old 9th gen Intel i5. Many mobos support 6 SATA ports and three mirrored 20 TB pairs is enough storage for me (see the pool-layout sketch at the end of this run of comments). I'm guessing it'll be a bit more power hungry than a ugreen/synology/etc appliance but there will also be plenty of headroom for running other services.

[1] https://www.truenas.com/docs/core/13.0/gettingstarted/coreha...

These shucked USB adapters from WD Elements external drives are pretty reliable, from my experience. They kinda have to be, since otherwise it would affect the reputation of WD's external drives as a whole. Obviously, direct SATA is still better if possible, but if not, these are probably the next best thing.

Those pesky WD bridges usually support USB Bulk Storage only but not UASP, resulting in worse performance and higher CPU usage. Also, HDD power management is often complicated by the bridge chip sometimes intervening. Not recommended for long-term use.

Been using like 7 external USB drives with 40-50TB total for a few years with no issues. Not RAID, just backing up drive to drive. No controller or drive issues. Mix of Seagate and WD 8/12/16TB.

I hate blanket recommendations like this by docs. To me, it just sounds like some guy had a problem a few times and now it's canon. It's like saying "avoid Seagate because their 3TB drives sucked." Well they did, but now they seem to be fine.

What may work anecdotally can't necessarily be used for official recommendations for a large range of users across an unknown range of hardware configurations. If it works for you, that's fine. That isn't sufficient to make a general statement that everybody will be fine using external USB drives, particularly for RAID, especially when people will then make you responsible if something goes wrong for not making sufficiently safe recommendations. You understand that, right?

RAID is much different. You can try it over USB, you won't have a good time. TrueNAS is primarily talking about RAID users.

Yes, I should have specified that this advice is specific to RAID configurations in NAS applications. If you're occasionally copying data to an external USB drive, that's totally fine. That's what they were designed for. The issue is that they were not designed for continuous use, or much more demanding applications like rebuilding/resilvering a drive. It's during these applications that issues occur, which is a double whammy, because it can cause permanent data loss if your USB drive fails during a recovery operation. I did a little more research after posting my last comment and came across this helpful post on the TrueNAS forums going into more depth: https://forums.truenas.com/t/why-you-should-avoid-usb-attach...

YMMV. I have a 4-drive 20TB mdraid10 across two different $50 USB 3.0 2-drive enclosures. I've read petabytes off this array with years of uptime and absolutely zero problems. And it runs on one of those $300 off-brand NUCs. The 2.5G NIC is the bottleneck on reads.
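Regarding the "three mirrored 20 TB pairs" build mentioned earlier in this thread: a minimal sketch of what such a pool layout could look like under ZFS. The pool name "tank" and the /dev/sdX device names are placeholder assumptions; a real build would use /dev/disk/by-id paths and whatever tuning fits the workload.

    # Three two-way mirrors striped together: ~60 TB usable from six 20 TB drives.
    # Device names and the pool name are illustrative; prefer /dev/disk/by-id in practice.
    zpool create -o ashift=12 tank \
      mirror /dev/sda /dev/sdb \
      mirror /dev/sdc /dev/sdd \
      mirror /dev/sde /dev/sdf

    zpool status tank    # verify the three mirror vdevs

Mirrored pairs trade capacity for simple, fast resilvers; losing one drive only requires copying its partner, which matters more with 20 TB disks than it did with small ones.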
Is that with ZFS or something else? Mainly I wouldn't do it because if there's space and SATA ports it seems stupid. Hotter. Worse HW. Can't really see much good reason to do it tbh, except it's in a small hot case which is relatively easy to move around. Maybe if you do occasional backups and you don't care about scrubbing and redundancy? Otherwise why not shuck them and throw them in a case?

I own this and it's worth its weight in gold:
https://www.supermicro.com/en/products/motherboard/A2SDi-H-T...

Yes. It's pricey but it's never been a problem. It can connect like 12 HDDs with 256GB RAM and has 10GbE and runs at a tiny TDP. Has IPMI. Fits in a tiny case. The only issue I had with this motherboard was that it was difficult to find someone who sold it. Love it.

Also, I don't see the built-in UPS. The external drives still use external power.

The laptop batteries tend to go bad (either just stop working, or expand and become a major fire hazard) after a year or two, as they are not built to be fully charged for years on end. I tried doing it twice and that is what happened both times. Would not recommend; if you want a UPS just buy one, the small ones are not that expensive, like 70 USD.

On ThinkPads tlp(8) can set a maximum battery charge threshold of 80%. The embedded controller takes care of it. Never had problems. Makes batteries live way longer.

On my ThinkPad T430, I have a weekly full discharge cycle set up using "tlp recalibrate BAT0", it helps avoid that issue and helps confirm that the battery is still functional.
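For reference, a minimal sketch of the charge-threshold setup the two ThinkPad comments above describe, using TLP. The 75/80 values are illustrative assumptions, and the config location can vary by distro (/etc/tlp.conf on recent versions, drop-ins under /etc/tlp.d/ also work).

    # /etc/tlp.conf (or a file under /etc/tlp.d/) -- illustrative thresholds
    START_CHARGE_THRESH_BAT0=75   # resume charging only below 75%
    STOP_CHARGE_THRESH_BAT0=80    # stop charging at 80%

    # Apply the config, or set thresholds temporarily from the shell:
    sudo tlp start
    sudo tlp setcharge 75 80 BAT0

    # Occasional full discharge/recharge cycle, as described above:
    sudo tlp recalibrate BAT0

Keeping the pack at ~80% most of the time is what lets an always-plugged-in laptop battery survive years of UPS-style duty.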
I don't use a laptop, but I use something fairly adjacent: the Beelink SER6 (https://www.amazon.com/Beelink-4-75GHz-PCIe4-0-Supports-HDMI...), which is basically a gaming laptop converted into a small desktop. For the most part, it has actually been pretty great. It's quiet, has a CPU that is much better than I expected, and a decent enough GPU to do hardware transcoding for Jellyfin without much issue. I use a USB chassis of hard drives to work as the "NAS" part, and it works fairly well, and this box is also my router (using a 10 GbE Thunderbolt adapter), though my biggest issue comes with large updates in NixOS. For reasons that are still not completely clear to me, when I do a very large system update (rebuilding Triton-llvm for Immich seems to really do it), the internal network will consistently cut out until I reboot the machine. I can log in through the external interface with Tailscale and my phone, so the machine itself is fine, but for whatever reason the internal network will die. And that's kind of the price you pay for using a non-server to do server work. It will generally work pretty well, but I find that it does require a bit more babysitting than a rack mount server did.

There are also variants of these mini PCs with hard drive bays. I recently bought an Aostar WTR Pro and I'd considered the Ugreen competitor.

Yeah, though I have 24 drives so I think by definition I couldn't really have a "mini" with enough bays to handle that.

If you don't need any performance it's a great backup strategy. If your only way of connecting the drives to the laptop is USB I would be concerned about data integrity if it's important data.

Why is USB so bad at data integrity? Doesn't it have error detection/correction? If so, that sounds like a huge design flaw.

Individual writes are safe, in my experience with thousands of USB drives in many configurations, some with 12 2TB drives hanging on multiple USB hubs at the same time. However, there are disconnects/reconnects every now and then. If you use a standard RAID over these USB drives, almost every disconnect/reconnect will trigger a rebuild — and rebuilds take many hours. If you are unlucky enough to have multiple disconnects during a rebuild, you are in trouble.

I ran an old Thinkpad as a home router and small home server/NAS device for quite a long time, usually swapping out my old work upgrades every 3 years or so. They all had onboard GigE so it worked fine - native VLAN for the inbound Comcast connection, tagged VLANs out to a switch for the various LAN connections. They were from the era of DVD drives so I was able to put an extra HDD in the DVD slot to expand storage with. One model even had an eSATA port. They worked great. Built-in UPS and they come with a reliable crash cart built in!

For me it was important to have ECC RAM, and laptops pretty much never have that. My personal recommendation is an old IBM/Lenovo workstation tower as the base. I bought one for $35 on eBay and added $40 of RAM (32GB). A $10 UPS from Goodwill with a $25 battery from Amazon, and whatever hard drives you want. I run Ubuntu and ZFS on it but next time would probably opt for FreeBSD for a nicer OS.

i had an m.2 to pci-e adapter for a sata controller. worked fine but the ups thing is a bit non-workable as the drives are not powered by the laptop

Used a Lenovo X220T with a cracked screen and missing keyboard a few years back. Worked like a champ (as a server). Cooling was much better without the keyboard.

I see a lot of people using M710 mini desktops - I think you can pop a PCIe 10GbE card in and an M.2 SATA card and 3D print a disk stand?

You can. It works fine if you know the limitations. An important one is, drives could disconnect, so traditional RAID wouldn't be good. If you want redundancy, look at something like SnapRAID, http://www.snapraid.it (a minimal config sketch follows at the end of this run of comments). If you want to combine into a single volume, consider rclone. These remotes specifically are the ones I'm thinking could be useful, Good luck o7

> I was about to buy a NAS.

The UNAS Pro 8 just came out and I'm thinking about getting it, switching away from my aging Synology setup ... only thing I wish it had was a UPS server, as my Synology currently serves that purpose to trigger other machines to shut down ...

I believe Synology's UPS monitoring is based on nut-server[1]. In my setup, I am running the server on a separate machine that reads UPS state over USB and Synology is just a client. Maybe UNAS could also just work as a client.

I've been considering "de-enterprising" my home storage stack to save power and noise and gain something a bit more modular. Currently I'm running on an old NAS 1U machine that I bought on eBay for about $300, with a raidz2 of 12x 18TB drives. I have yet to find a good way to get nearly that much storage without going enterprise or spending an absolute fortune. I'm always interested in these DIY NAS builds, but they also feel just an order of magnitude too small to me. How do you store ~100 TB of content with room to grow without a wide NAS? Archiving rarely used stuff out to individual pairs of disks could work, as could running some kind of cluster FS on cheap nodes (tinyminimicro, Raspberry Pi, Framework laptop, etc) with 2 or 4x disks each off USB controllers. So far none of this seems to solve the problem that is solved quite elegantly by the 1U enterprise box... if only you don't look at the power bill.
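As referenced in the M710/SnapRAID comment above, a hedged sketch of what a SnapRAID setup over a few independent disks could look like. The paths, disk names, and two-data-disk layout are assumptions for illustration only.

    # /etc/snapraid.conf -- illustrative paths and disk names
    parity  /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/

    # Typical usage: update parity after changes, then scrub periodically
    snapraid sync
    snapraid scrub
    snapraid status

Because SnapRAID computes parity in batch rather than on every write, a USB disk briefly dropping offline is far less catastrophic than it would be for realtime RAID, which is presumably the point of the recommendation.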
> How do you store ~100 TB of content with room to grow without a wide NAS?

In the cloud (S3) or offline (unpowered HDDs or tapes or optical media), I suppose. Most people just don't store that much content.

> So far none of this seems to solve the problem that is solved quite elegantly by the 1U enterprise box... if only you don't look at the power bill.

What kind of power bill are you talking about? I'd expect the drives to be about 10W each steady state (more when spinning up), so 180W. I'd expect a lower-power motherboard/CPU running near idle to be another 40W (or less). If you have a 90% efficient PSU, then maybe 250W in total. If you're way more than that, you can probably swap out the old enterprisey motherboard/RAM/CPU/PSU for something more modern and do a lot better. Maybe in the same case. I'm learning 1U is pretty unpleasant though. E.g. I tried an ASRock B650M-HDV/M.2 in a Supermicro CSE-813M. A standard IO panel is higher than 1U. If I remove the IO panel, the motherboard does fit... but the VRM heatsink was also high enough that the top case bows a bit when I put it on. I guess you can get smaller third party VRM heatsinks, but that's another thing to deal with. The CPU cooler options are limited (the Dynatron A42 works, but it's loud when the CPU draws a lot of power). 40mm case fans are also quite loud to move the required airflow. You can buy Noctuas or whatever, but they won't really keep it cool. The ones that actually do spin very fast and so are very loud. You must have noticed this too, although maybe you have a spot for the machine where you don't hear the noise all the time. I'm trying 2U now. I bought and am currently setting up an Innovision AS252-A06 chassis: 8 3.5" hot swap bays, 2U, 520mm depth. (Of course you can have a lot more drives if you go to 2.5" drives, give up hot swap, and/or have room for a deeper chassis.) Less worry about whether stuff will fit, more room for airflow without noise.

2U is definitely better, but I didn't notice significant drops in dB till I could stuff a 120mm fan in the case. That requires 3U or more. And if you need a good fan that's quiet enough for the CPU, you're looking at 4U. Otherwise, you'll need AIOs hooked up to the aforementioned 120s.

> And if you need a good fan that's quiet enough for the CPU, you're looking at 4U.

Depends on the CPU, I imagine. I'm using one with a 65W TDP. I'm hopeful that I can cool that quietly with air in 2U, without having to nerf it with lower BIOS settings. Many NASs have even lower power CPUs like the Intel N97 and friends.

Oh yes, you can definitely get away with much less for something like that or an ARM, Ryzen embedded chips, etc. The 4U is more for full scale desktop CPUs like the i9-12900K I am running (with something like an NH-D15 sink/fan). You may even be able to get away with passive cooling at the 65W range.

There's a bit of a trend of vendors packaging mobile CPUs in desktop form factors which are a good candidate for this. Rather than the prebuilt mini PCs this also includes mini-ITX boards. Personally I use the Minisforum BD795i SE, but there are others too. Check for PCIe bifurcation support. If that's there you can pop in a PCIe to quad M.2 adapter. That will split a PCIe x16 slot into 4 x M.2s. Each of those (and the M.2s already on the motherboard) can then be loaded with either an NVMe drive or an M.2 to SATA adapter, with each adapter providing 6 x SATA ports.
That setup gives a lot of flexibility to build out a fairly extensive storage array with both NVMe and spinning platters and no USB in sight. As a nice side effect of the honestly bonkers amount of compute in those boards, there's also plenty of capacity to run other VM workloads on the same metal, which lets a lot of the storage access happen locally rather than over the network. For me, that means the on-board 2.5GbE NIC is more than fine, but if not you can also load M.2 to 10GbE adapter(s) as needed.

This sounds like a really nice setup. Which M.2 to SATA adapters are you using? I've heard some of those are dodgy and others are alright.

I've not used any of them, but from my shopping some of them are multiport SATA adapters, and some of them are a single port SATA adapter plus a SATA port multiplier. I would expect the port multiplier variants to be dodgier.

I have to imagine that the best NAS build is simply a 6-core or 8-core standard AMD or Intel with a few HBA controllers and maybe 10Gbit SFP+ fiber or something. "Old server hardware" for $300 is a bit of a variation, in that you're just buying something from 5 years ago so that it's cheaper. But if you want to improve power-efficiency, buy a CPU from today rather than an old one. IIRC, the "5 year old used market" for servers is particularly good because many datacenters and companies opt for a ~5-year upgrade cycle. That means 5-year-old equipment is always being sold off at incredible rates. Any 5-year-old server will obviously have all the features you need for a NAS (likely excellent connectivity, expandability, BMC, physical space, etc. etc.). You just have to put up with the power-efficiency specs of 5 years ago.

For AMD Zen, they have power consumption overhead on all chiplet designs; even if the chip only has one core complex, the separate IO die makes it hard to get idle power consumption under 30W. Usually the chips with explicitly integrated GPUs (G-suffix, or laptop chips) are monolithic and can hit 10W or lower.

The Dell R500 series is very good for dense storage at low cost if you lean towards SATA or NL-SAS.

If you want instant access to any bit of the 100 TB of content, you need a wide NAS. Otherwise, you can have a couple of HDD racks in which you can insert HDDs when needed (SATA allows live insertion and extraction, like USB). Then you have an unlimited amount of offline storage, which can be accessed in a minute by swapping HDDs. You can keep an index of all files stored offline on the SSD of your PC, for easy searches without access to the HDDs. The index should have all relevant metadata, including content hashes, for file integrity verification and for duplicate file identification (a sketch of building such an index follows this comment). Having 2 HDD racks instead of just 1 allows direct copies between HDDs and doubles the capacity accessible without swapping HDDs. Adding more than 2 adds little benefit. Moreover, some otherwise suitable MBs have only 2 SATA connectors. Or else you can use an LTO drive, which is a very steep initial investment, but its cost is recovered after a few hundred TB by the much cheaper magnetic tapes. Tapes have a worse access time, on the order of one minute after tape insertion, but they have much higher sequential transfer speeds than cheap SATA HDDs. Thus for retrieving big archive files or movies they save time. Transfers from magnetic tape must be done either directly to an NVMe SSD or to an NVMe SSD through Ethernet of 10 Gb/s or faster, otherwise their intrinsic transfer speed will not be reached.
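As referenced in the comment above, a minimal sketch of how such an offline-disk index could be built. The disk label, mount point, output directory, and the choice of sha256 are assumptions for illustration.

    # Hash and catalogue every file on an archive disk before taking it offline.
    # "ARCHIVE-07" is a hypothetical disk label; adjust paths to taste.
    DISK=ARCHIVE-07
    mkdir -p "$HOME/disk-index"

    find "/mnt/$DISK" -type f -print0 \
      | xargs -0 sha256sum \
      > "$HOME/disk-index/$DISK.sha256"

    # Size and mtime metadata for the same files, useful for search and dedupe hints
    find "/mnt/$DISK" -type f -printf '%s\t%TY-%Tm-%Td\t%p\n' \
      > "$HOME/disk-index/$DISK.files"

    # Later: re-verify a disk against its stored index after remounting it
    sha256sum -c --quiet "$HOME/disk-index/$DISK.sha256"

    # Or find duplicates across all indexed disks by comparing the 64-char hash prefix
    cat "$HOME"/disk-index/*.sha256 | sort | uniq -w64 -D

With the hash lists living on an always-online SSD, checking a disk after years on the shelf is just a matter of re-running the checksum pass and comparing.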
Nah, buy the right enterprise gear instead: https://www.supermicro.com/en/products/motherboard/A2SDi-H-T...

I'd really dig a version of this with a Ryzen AI chip and 128GB of RAM. I'm moving to the Lenovo Tiny M75q series for now due to low idle power and heat generated.

If you want 100TB, you need a bigger NAS than most, and that makes most of the DIY NAS options not so good. 2-4 drives seems to be where DIY shines. These days motherboards often stop at 4x SATA, so you'll need an HBA or USB (eww). Personally, I just don't have that much data; 24TB mirrored for important data is probably enough, and I have my old mirror set available for media like recorded TV and maybe DVDs and Blu-rays, if I can figure out a way to play them that I like better than just putting the discs in the machine.

We run 48TB (after redundancy, 3 striped mirrors) over a USB enclosure (TerraMaster D6-320) and it's honestly not as bad as people say. The only failure this system experienced in the past few years was due to noisy power causing a reset, and the ZFS root (not the data pool) becoming read-only due to a write hole caused by a consumer NVMe (Crucial P3 Plus) lying about being synced (who could've expected that).

Check out the Jonsbo N5 NAS case, you can toss 12 drives and a low power mITX motherboard (see sibling comments) in it for a cheap-ish, neat-ish box with a non-proprietary upgrade path.

Uhh, could you provide a hook for such a deal? I've been starving for more storage and can now handle a rack mounted system, but have been avoiding dropping $1000 on a pair of new hard drives.

I just missed an eBay opportunity to get a Dell R730xd with 12x 12TB drives for around 400 dollars. If you're willing to wait and bid-snipe you can find deals like that routinely; just wait to find one with the size drives you want. If you just need the drives, similar lot sales are available for high power-on time, zero errors enterprise drives. I bought a lot of 6x 6TB drives two weeks ago for 120 USD and they all worked fine. If you have the bay space and a software solution that lets you swap them in and out as needed without disturbing data, then there is a lot of 'hobby fun' to be had with managing a storage rack.

I have a case with several 3.5" bays and a TrueNAS server happily running. I've been running an all-flash array because I had a bright-eyed vision of the future. At this point a very cheap pile of unreliable spinning rust is exactly what I need. Thanks for the tips.

A fortune? I'm getting 14TB SAS drives "recertified" on eBay for 150 USD. Substantially less than most other sources of hard drives. Depending on your drive enclosure it should also be able to power down drives that aren't actively being used. Recertified/used enterprise equipment is the only way to affordably host 100s of terabytes at home.

The reduction in warranty from 5 years to 1 when buying these doesn't weigh up against the quite limited reduction in price. This would only cover failures during the first few months of runtime, and while most drive failures will be in the beginning, or after 5+ years, I've seen enough drives die in years 2-5 to prefer some warranty cover, especially on $200 drives.

I admire the courage to store data on refurbished Seagate hard drives. I prefer SSD storage with some backups using cloud cold storage, because I'm not the one replacing the failing hard drives.

I would also prefer having a large number of high capacity SSDs so I could replace my spinning hard drives.
But even the cheapest high capacity SSD deals are still a lot more expensive than a hard drive array. I'll continue replacing failing hard drives for a few more years. For me that has meant zero replacements over a decade, though I planned for a 5% annual failure rate and have a spare drive in the case ready to go. I could replace a failed drive from the array in the time it takes to shut down, swap a cable to the spare drive, and boot up again. SSDs also need to be examined for power loss protection. The results with consumer drives are mixed and it's hard to find good info about how common drives behave. Getting enterprise grade drives with guaranteed PLP from large onboard capacitors is ideal, but those are expensive. Spinning hard drives have the benefit of using their rotational inertia to power the drive long enough to finish outstanding writes.

This is going to be a huge anecdote, but all the consumer SSDs I've had have been dramatically less reliable than HDDs. I've gone through dozens of little SATA and M.2 drives and almost every single one of them has failed when put into any kind of server workload. However most of the HDDs I have from the last 10 years are still going strong despite sitting in my NAS and spinning that entire time. After going deep on the spec sheets and realizing that all but the best consumer drives have miserably low DWPD numbers, I switched to enterprise (U.2 style) two years ago. I slam them with logs, metrics data, backups, frequent writes and data transfers, and have had 0 failures.

What file system are you using? ZFS is written with rotating rust in mind and presumably will kill non-enterprise SSDs.

Curious, what's the use case for wanting your data backed up without fail? Is it personal archives or otherwise (business) archive related? Not to say you shouldn't back up your data, but personally I wouldn't be too affected if one of my personal drives errored out, especially if they contained unused personal files from 10+ years ago (legal/tax/financials are another matter).

Any data I created, paid to license, or put in significant work to gather has to be backed up with the 3-2-1 rule. Stuff I can download or otherwise obtain again is best effort but not mandatory backup. Mainly I don't want to lose anything that took work to make or get. Personal photos, videos, source code, documents, and correspondence are the highest priority.

You can find cheap used enterprise SSDs on eBay. But the problem is that even the most power efficient enterprise SSDs (SATA) idle at like 1W. And given the smaller capacities, you need many more to match a hard drive. In the end HDDs might actually consume less power than an all-flash array + controllers if you need a large capacity.

Used SSDs, especially enterprise ones, are a really bad idea unless you get some really old SLC parts. Flash wears out in a very obvious way that HDDs don't, and keep in mind that enterprise-rated SSDs are deliberately rated to sacrifice retention for endurance.

Agree on SSDs for cold storage, that's not a good idea. But you would be surprised by how lightly used typical used enterprise SSDs on eBay are. This article matches my experience: https://www.servethehome.com/we-bought-1347-used-data-center... I bought over 200 over the last year, and the average wear level was 96%, and 95% had a wear level above 88%.

Endurance and retention are inversely correlated, and as I mentioned in my original comment, enterprise DC drives are designed to advertise the former at the expense of the latter. The industry standard used to be 5 years retention for consumer and 3 months for enterprise, after reaching the specified TBW. The wear level SMART counter reflects that; "96% remaining" on an enterprise drive may be 40% or less on a consumer one having written the same amount, since the latter is specified to hold the data for longer once its rating has been reached.
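For anyone vetting second-hand drives like the ones discussed above, a hedged sketch of the smartctl checks one might run. The device names are placeholders, attribute names differ by vendor, and NVMe devices report a "Percentage Used" field instead of the SATA wear attributes grepped for here.

    # SATA/SAS SSD: overall report, then pick out wear / endurance related attributes
    smartctl -a /dev/sda
    smartctl -A /dev/sda | grep -Ei 'wear|percent|written|power_on'

    # NVMe SSD: the SMART health log includes "Percentage Used" and data written
    smartctl -a /dev/nvme0
    nvme smart-log /dev/nvme0     # if nvme-cli is installed

    # Kick off a long self-test and read the result later
    smartctl -t long /dev/sda
    smartctl -l selftest /dev/sda

Low "percentage used" or high "wear level remaining" on an eBay drive is consistent with the lightly-used fleet pulls described above, but it says nothing about how long the drive would hold data unpowered.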
Retention is offline retention, not online. So I'm not sure what point you are trying to make. If it is that SSDs shouldn't be used for cold storage, yeah, I agree, and enterprise SSDs aren't designed for cold storage. But you seem to be linking retention to TBW, which are largely orthogonal metrics. If you are going to use the SSDs in a NAS, which by definition is running all the time, why would you even care about the retention rating?

RAID. Preferably RAID 6. Much, much better to build a system to survive failure than to prevent failure.

Don't RAID these days. Software won rather drastically, likely because CPUs are finally powerful enough to run all those calculations without much of a hassle. Software solutions like Windows Storage Spaces, ZFS, XFS, unRAID, etc. are "just better" than traditional RAID. Yes, focus on 2x parity drive solutions, such as ZFS's "raidz2", or other such "equivalent to RAID6" systems. But just focus on software solutions that more easily allow you to move hard drives around without tying them to motherboard slots or other such hardware issues.

> Don't RAID these days. Software won rather drastically

RAID does not mean or imply hardware RAID controllers, which you seem to incorrectly assume. Software RAID is still 100% RAID.

And "softRAID", like what comes for free on Intel or AMD motherboards, sucks and should be avoided. The best advice I can give is to use a real solution like ZFS, Storage Spaces and the like. It's not sufficient to say "use RAID", because within the Venn diagram of things falling under RAID is a whole bunch of shit solutions and awful experiences.

I haven't seen a machine shipped with firmware RAID in decades.

It's still enabled in the firmware of some vendors' laptops -- ones deep in Microsoft's pockets, like Dell, who personally I would not touch unless the kit were free, but gullible IT managers buy the things. My personal suspicion is that it's an anti-Linux measure. It's hard to convert such a machine to AHCI mode without reformatting unless you have more clue than the sort of person who buys Dell kit.

In real life it's easy: set Windows to start in Safe Mode, reboot, go into the firmware, change RAID mode to AHCI, reboot, exit Safe Mode. Result: Windows detects a new disk controller and boots normally, and now all you need to do is disable BitLocker and you can dual-boot happily. However that's more depth of knowledge than I've met in a Windows techie in a decade, too.

FYI, XFS is not redundant; also, RAID usually refers to software RAID these days. I like btrfs for this purpose since it's extremely easy to set up over the CLI, but any of the other options mentioned will work.

btrfs RAID is quite infamous for eating your data. Has it been fixed recently?

To be fair, your statement could be edited as follows to increase its accuracy:

> btrfs is quite infamous for eating your data.

This is the reason for the slogan on the bcachefs website: "The COW filesystem for Linux that won't eat your data". After over a decade of in-kernel development, Btrfs still can't either give an accurate answer to `df -h`, or repair a damaged volume.
Because it can't tell a program how much space is free, it's trivially easy to fill a volume. In my personal experience, writing to a full volume corrupts it irretrievably 100% of the time, and then it cannot be repaired. IMHO this is entirely unacceptable in an allegedly enterprise-ready filesystem. The fact that its RAID is even more unstable merely seals the deal.

> Btrfs still can't either give an accurate answer to `df -h`, or repair a damaged volume.

> In my personal experience, writing to a full volume corrupts it irretrievably 100% of the time, and then it cannot be repaired.

While I get the frustration, I think you could have probably resolved both of them by reading the manual. Btrfs separates metadata & regular data, meaning if you create a lot of small files your filesystem may be 'full' while still having space available; `btrfs f df -h <path>` would give you the breakdown. Since everything is journaled & CoW, it will disallow most actions to prevent actual damage. If you run into this you can recover by adding an additional disk for metadata (can just be a loopback image), rebalancing, and then taking steps to resolve the root cause, finally removing the additional disk. May seem daunting but it's actually only about 6 commands (a sketch of them follows at the end of this run of comments).

No. RAID 5/6 is still fundamentally broken and probably won't get fixed.

This is incorrect, quoting the Linux 6.7 release notes (Jan 2024): "This release introduces the [Btrfs] RAID stripe tree, a new tree for logical file extent mapping where the physical mapping may not match on multiple devices. This is now used in zoned mode to implement RAID0/RAID1* profiles, but can be used in non-zoned mode as well. The support for RAID56 is in development and will eventually fix the problems with the current implementation." I've not kept up with more recent releases but there has been progress on the issue.

And I prefer to have a healthy bank account balance. Storing 18TB (let alone with RAID) on SSDs is something only those earning Silicon Valley tech wages can afford.

We bought a few Kioxia 30.72 TiB SSDs for a couple of thousand in a liquidation sale. Sadly, I don't work there any more or I could have looked it up. U.2 drives if I recall, so you do need either a PCIe card or the appropriate stuff on your motherboard, but pretty damn nice drives.

Not really. I know that my sleep is worth more than the difference between HDD and SSD prices, and I know the difference between the failure rates and the headache caused by the RMA process, so I buy SSDs.

In essence, what we together are saying is that people with super-sensitive sleep who are also easily upset, and who don't have ultra-high salaries, cannot really afford 18 TB of data (even though they can afford an HDD), and that's true.

Well, again, well done on being able to afford it. I have a 24TB array on cheap second hand drives from CEX for about £100 each, using DrivePool - and guess what, if one of them dies I'll just buy another £100 second hand drive. But also guess what - in the 6 years I've had this setup, all of these are still in good condition. Paying for SSDs upfront would have been a gigantic financial mistake (imho).

Every drive is "used" the moment you turn it on.

There's a big difference between used as in "I just bought this hard drive and have used it for a week in my home server", and used as in "refurbished drive after years of hard labor in someone else's server farm".
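As referenced in the btrfs comment above, a hedged sketch of the "about 6 commands" recovery for a volume that has run out of metadata space. The loop image size, mount point, and balance filters are illustrative assumptions.

    # Give the full filesystem temporary breathing room via a loopback device
    truncate -s 4G /tmp/btrfs-spare.img
    LOOP=$(losetup -f --show /tmp/btrfs-spare.img)
    btrfs device add "$LOOP" /mnt/pool

    # Rebalance so chunks can be reallocated, then fix the root cause
    btrfs balance start -dusage=10 -musage=10 /mnt/pool
    # ... delete the files/snapshots that filled the volume ...

    # Remove the temporary device again and clean up
    btrfs device remove "$LOOP" /mnt/pool
    losetup -d "$LOOP"

The device removal itself triggers a relocation of any chunks that landed on the loop device, so by the end the filesystem is back on its original disks with usable free space.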
Enterprise drives are way different than anything consumer based. I wouldn't trust a consumer drive used for 2 years, but a true enterprise drive has like millions of hours left of its life. Quote from Toshiba's paper on this [1]:

"Hard disk drives for enterprise server and storage usage (Enterprise Performance and Enterprise Capacity Drives) have MTTF of up to 2 million hours, at 5 years warranty, 24/7 operation. Operational temperature range is limited, as the temperature in datacenters is carefully controlled. These drives are rated for a workload of 550TB/year, which translates into a continuous data transfer rate of 17.5 Mbyte/s[3]. In contrast, desktop HDDs are designed for lower workloads and are not rated or qualified for 24/7 continuous operation."

From Synology [2]:

"With support for 550 TB/year workloads and rated for a 2.5 million hours mean time to failure (MTTF), HAS5300 SAS drives are built to deliver consistent and class-leading performance in the most intense environments. Persistent write cache technology further helps ensure data integrity for your mission-critical applications."

[1] https://toshiba.semicon-storage.com/content/dam/toshiba-ss-v...

[2] https://www.synology.com/en-us/company/news/article/HAS5300/...

Take a look at the Backblaze drive stats. Consumer drives are just as durable, if not more so, than enterprise drives. The biggest thing you're getting with enterprise drives is a longer warranty. If you're buying them from the second hand market, you likely don't get the warranty (and that's likely why they're on the second hand market).

There isn't a significant difference between "enterprise" and "consumer" in terms of fundamental characteristics. They have different firmware and warranties, and usually the disks are tested more methodically. Max operating range is ~60C for spinning disks and ~70C for SSDs. Optimal is <40-45C. The larger facilities afaik tend to run as hot as they can.

> drive has like millions of hours left of its life.

It doesn't apply to a single drive, only to a large number of drives. E.g. if you have 100,000 drives (2.4 million hours MTTF) in a server building with the required environmental conditions and maximum workload, be prepared to replace a drive once a day on average.

datablocks.dev has a page explaining what white label and recertified disks are [1]. Those are not disks used for years under heavy load.

1: https://datablocks.dev/blogs/news/white-label-vs-recertified...

Drive failure rate versus age is a U-shaped curve. I wouldn't distrust a used drive with healthy performance and SMART parameters. And you should use some form of redundancy/backups anyway. It's also a good idea to not use all disks from the same batch, to avoid correlated failures.

Anecdote: I've had a very bad experience with these OS white label drives, even when marked as new. I've had much better luck shucking USB drives. 4+ years ago I bought 20 "new" (can't validate), "Seagate manufactured" (can't validate) "OS" SAS drives, and 2 started throwing errors in TrueNAS quickly (sadly after the window in which I could return them). I had another 20 WD and Seagate drives I shucked at the same time (this was going into 3 12x SAS/SATA machines and 1 4x SATA NAS). The NAS got sidelined as I had to use the SATA drives it was meant to get, and I no longer trusted the SAS drives, so I wanted to keep the 2 extra drives as backup. Which was a good idea, as over the next 4 years another 2 of the SAS drives started throwing similar errors. So 20% of the white label drives didn't really last, while 100% of the shucked drives have. What was even worse, the firmware on the "OS" drives was crap: while it "technically" had SMART data, it didn't provide any stats, just passed/not passed. (main lesson learned from this, don't accept

Another anecdote: For a long time I wasn't sure what to do with the SAS drives, as in the past I used spare drives like this for cold offline storage, but SAS docks were very expensive ($200+). Recently it seems they have come down in price to under $50, so I bought one and was able to fill the drives up (albeit very slowly; it seems they did have problems, as I was only getting 10-20MB/s), but at least I was able to validate their contents a few times after that, a bit less slowly (80MB/s).

Aside: 3 weeks ago I had multiple power outages that I thought created problems in one of the shucked drives (I was getting uncorrectable reads, though ZFS handled it OK) and a SMART long test showed pending sectors. But after force-writing all the pending sectors with hdparm, none of the sectors were reallocated. I now think it just had bad partial writes when the power outage hit, so the sectors literally had bad data, as the error correcting code didn't match up (which also explains why they were all in blocks of 8), and multiple SMART long tests later and "fingers crossed", everything seems fine.
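For context on the pending-sector dance described above, a hedged sketch of the kind of commands involved. The device name and LBA are placeholders, and hdparm's write-sector option destroys whatever is in that sector, so it is only for sectors already confirmed unreadable.

    # Check the counters that matter after a suspect power event
    smartctl -A /dev/sdX | grep -E 'Current_Pending_Sector|Reallocated_Sector_Ct|Offline_Uncorrectable'

    # Run a long self-test and read back the log (it reports the first failing LBA)
    smartctl -t long /dev/sdX
    smartctl -l selftest /dev/sdX

    # Confirm the sector really is unreadable, then force a rewrite of it.
    # 123456789 is a placeholder LBA; the write zeroes that sector.
    hdparm --read-sector 123456789 /dev/sdX
    hdparm --yes-i-know-what-i-am-doing --write-sector 123456789 /dev/sdX

If the rewrite succeeds without the reallocated count increasing, the sector surface is fine and the drive had simply stored data whose ECC no longer matched, which is consistent with the interrupted-write theory above.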
hello, thanks for the great article!! 2 remarks from my side:

* some smartctl -a ... output would have been nice ~ i don't care if it is from "when the drives were shipped" or from any later point in time

* prices are somewhat ... aehm ... let's call them "uncompetitive", at least for where i'm at (Austria, central Europe, EU). i compared prices normalized by cost per TB with new (!) drives from the Austrian price portal "geizhals", for example for 3.5" HDDs sorted by "price / TB": https://geizhals.at/?cat=hde7s&xf=5704_3.5%22~5717_SATA%203G... sometimes the prices are slightly higher for the used (!) drives ... sometimes also a bit lower, but imho (!) not enough to justify buying refurbished drives over new (!) ones ... just my 0.02€

What exactly are these "white label drives"? Aren't these just normal Seagate Exos drives with SMART information wiped and labels removed? i.e. just a worse used drive.

The "OS" on the drive stands for "off-spec". As far as I understand, here's where they come from:

1. A large company (think cloud storage provider or something) wanting to build out storage infrastructure buys a large amount of drives from Seagate.

2. When the company receives the drives from Seagate, they randomly sample from the lot to make sure the drives are fully functional and meet specifications.

3. The company identifies issues from the sampled drives. These can range from dents/dings in the casing or torn labels to firmware or reliability issues.

4. The company returns the entire lot to Seagate as defective. Seagate now doesn't want anything to do with these drives, so they relabel them as "OS" with no Seagate branding and sell them as-is at a discount to drive resellers.

5. The drive resellers may or may not do further testing on the drives (you can probably tell by how much of a warranty a given reseller offers) before selling them on to people wanting cheap storage.

Apparently Seagate drives that weren't good enough to have their own name on them... which, given the history of even their branded drives, is something I'd only use for temporary caching of data that's easily regenerated.

What a fascinating website (in general - other articles are worth reading too). The author is Estonian; the website name (and his name) 'õunapuu' means 'apple tree'. I love Estonian names: often closely tied to nature.

The way the story led with the belief that the drives were likely going to be untrustworthy made me think the author was going to throw them in a system with multiple redundancies or use them as additional parity drives.. god speed!

I was hoping for a full text dump of the SMART data from the drives.

If CSPRNG encrypts /dev/urandom, encrypting the data using a binary 256 bit AES cmd to update the entropy pool would double contain the data, which is writing /dev/random to /dev/nvme0n1p1.

OT:

> Half of tech YouTube has been sponsored by companies like...

It just struck me that product reviews are a part of the social realm that is barely explored. Imagine a video website like TikTok or YouTube etc. where all videos are organized under products. Priority to those who purchased the product, and a category ranked by how many similar products you've purchased.
The thing sort of exists currently in some hard-to-find corner of TEMU etc., but there are no channels or playlists.

The reason you don't see videos arranged by product is because everyone knows not to trust unknown creators telling you how great a product is. Viewers want to see opinions from specific people they've come to trust, not the first video that comes up for a product.

They don't have to tell you anything. Just unbox and show what they got. I just purchased a bicycle chain cleaning device. It was absurdly cheap. The plastic was extruded poorly, it was hard to assemble, and it was not entirely obvious how to use it. However! It did the job and it barely got dirty. I expected it to be full of rusty oil both inside and outside, but it accumulated just a tiny smudge on the inlet. If anyone made a video it would be a fantastic product.

God, the flood of absolutely useless "review" videos that Amazon has incentivized customers to shit all over their site - nothing more than unboxings - is the worst thing about that ecosystem. No thank you.

Think of it like a football channel, a place to contain such things. Amazon is just not interested in organizing it properly. You should have a look at what river of fresh nonsense is uploaded to YouTube. The difference is that Amazon has you look at it as if it were something valuable.

Alternatively, unknown creators have less incentive to falsely promote or lie. It's the reason I tend to trust random strangers on Reddit more than popular YouTubers who have achieved monetization and sponsorship.

No, that's the opposite of how it works. I've seen how PR firms interact with creators. It's much easier to get the small-time creators to take your product and make a positive video, because getting some free product is the biggest payout they're getting from their channel. They will always give positive reviews because they have more to gain from flattering the companies that send them free stuff than from the $1.50 they're going to earn in ad money. The PR firms who worked with the company I was at had a long list of small-time video creators who would reliably produce positive videos of something as long as you sent them a free product. The creators know this game.

I don't trust big channels especially, because I assume they have just sold themselves out to the biggest sponsor. Influencers only exist due to campaign deals, where companies try to sneak their ads into your mind by abusing your inclination to trust another human being. All of it is sickening. In comparison, I'd rather read a general review magazine with a long history. At least they don't try to trick me into believing they are working out of the goodness of their hearts, and they usually aren't married to a single big sponsor. Online reviews are broken beyond repair.

Coincidentally or not, those folks who have more subscribers usually charge more for their consideration. That's why I generally trust Steve of Gamers Nexus more than other folks, because they don't do ads except for promoting their own products, so there's no conflict of interest. On the one hand, Gamers Nexus doesn't manufacture their own hard drives, but on the other, they publish their methodology and have a reputation to uphold, so I would trust their judgement regarding testing computer hardware more than folks who do engage in outside advertising.

There's Kakaku.com [1] on the Japanese internet for all consumer electronics, Minkara for cars, bookmeter.com for books, and 5ch.net as a fallback.
It's surprising that Goodreads is the only one on the English internet that everyone has heard of...

1: https://review.kakaku.com/review/K0001682323/ | https://review-kakaku-com.translate.goog/review/K0001682323/...

These drives are very likely refurbs that are unofficial. White labeling avoids lawsuits.

I never understood why they let Seagate et al play this game with hard drives. If they offer a warranty, then replace the drive with a brand new one, and shove the recertified, fixed, whatever bullshit up your wahzoo.