Improving storage efficiency in Magic Pocket, Dropbox's immutable blob store
dropbox.tech | 48 points by laluser 6 days ago
> Last year, we rolled out a new service that changed how data is placed across Magic Pocket. The change reduced write amplification for background writes, so each write triggered fewer backend storage operations. But it also had an unintended side effect: fragmentation increased, pushing storage overhead higher. Most of that growth came from a small number of severely under-filled volumes that consumed a disproportionate share of raw capacity.
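The quoted effect is easy to see with a toy back-of-the-envelope calculation (all numbers below are hypothetical, not from the article): even when most volumes are well filled, a small minority of severely under-filled volumes can account for the bulk of the raw-capacity overhead.

```python
# Toy model: 100 volumes of equal raw capacity, normalized to 1.0 each.
# Hypothetical fill fractions: 90 well-filled volumes and 10 badly
# fragmented, nearly empty ones.
fill_fractions = [0.95] * 90 + [0.10] * 10

raw_capacity = float(len(fill_fractions))          # 100.0
logical_data = sum(fill_fractions)                 # 86.5
overhead = raw_capacity - logical_data             # 13.5

# Overhead contributed by just the 10 under-filled volumes:
underfilled_overhead = sum(1.0 - f for f in fill_fractions if f < 0.5)

print(f"total overhead: {overhead:.1f} volume-equivalents")
print(f"under-filled share: {underfilled_overhead / overhead:.0%}")
# Here 10% of the volumes account for roughly two thirds of the overhead.
```

This is why the article's fix can focus on compacting or repacking a small tail of volumes rather than rewriting the whole fleet.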
I always assumed big corps with huge infrastructure bills meticulously model changes like this against the production data they have, so the exact impact on every metric they care about is known upfront. Turns out they're like me: deploy and see what breaks.
Does Amazon ever publish similar articles about S3?
Google recently increased storage from 2 TB to 5 TB on their $20 AI plan, while Dropbox is still stuck at 2 or 3 TB for their $12/$20 plans.
They moved from 1 TB to 2 TB in mid-2019, and I wonder if they ever plan to pass on any of the gains from the past seven years of technological advancements, or if those gains are simply being captured on their side while we keep paying the same.
Are these "technological advancements" in storage in the room with us right now? Because I'm looking at today's price per TB and it's higher than it was in 2020.
The immutability of extents is dictated by their SMR hardware, I believe.
I don't know the full picture behind their decision-making, but in general, immutability is much easier to reason about in a distributed system.
All this talk about a tool that isn’t open source?