4x faster network file sync with rclone (vs rsync) (2025)

jeffgeerling.com

294 points by indigodaddy 4 days ago


digiown - 17 hours ago

Note there is no intrinsic reason running multiple streams should be faster than one [EDIT: "at this scale"]. It almost always indicates some bottleneck in the application or in TCP tuning. (Very fast links can overwhelm slow hardware, and ISPs might do some traffic shaping too, but neither applies to local links.)

SSH was never really meant to be a high-performance data transfer tool, and it shows. For example, it has a hardcoded maximum receive buffer of 2 MiB (separate from the TCP one), which drastically limits transfer speed over high-BDP (bandwidth-delay product) links, even a fast local link like the 10 Gbps one the author has. The encryption can also be a bottleneck. hpn-ssh [1] aims to solve this issue, but I'm not so sure about running an SSH fork on important systems.

1. https://github.com/rapier1/hpn-ssh
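
If you want to rule out plain TCP tuning first, these are the usual Linux knobs. The values below are only illustrative, not recommendations, and none of this helps past SSH's own ~2 MiB window:

  # current limits
  sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem
  # raise the ceilings (example values for a fast, high-BDP path)
  sudo sysctl -w net.core.rmem_max=67108864 net.core.wmem_max=67108864
  sudo sysctl -w net.ipv4.tcp_rmem="4096 131072 67108864"
  sudo sysctl -w net.ipv4.tcp_wmem="4096 131072 67108864"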

ericpauley - 17 hours ago

Rclone is a fantastic tool, but my favorite part of it is actually the underlying FS library. I've started baking Rclone FS into internal Go tooling, and now everything transparently supports reading/writing to either local or remote storage. Really great for being able to test data analysis code locally and then run it as batch jobs elsewhere.
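
For the curious, here's roughly what that looks like. This is only a sketch assuming rclone's documented "use as a library" entry points (the backend/all registration import, configfile.Install, fs.NewFs); check the module docs for exact signatures:

  package main

  import (
      "context"
      "fmt"
      "io"
      "os"

      _ "github.com/rclone/rclone/backend/all" // register every backend
      "github.com/rclone/rclone/fs"
      "github.com/rclone/rclone/fs/config/configfile"
  )

  // openFile reads one file from either a local directory ("/tmp/data") or a
  // configured remote ("s3:bucket/data") through the same code path.
  func openFile(ctx context.Context, dir, name string) (io.ReadCloser, error) {
      f, err := fs.NewFs(ctx, dir)
      if err != nil {
          return nil, err
      }
      obj, err := f.NewObject(ctx, name)
      if err != nil {
          return nil, err
      }
      return obj.Open(ctx)
  }

  func main() {
      configfile.Install() // load the usual rclone.conf
      rc, err := openFile(context.Background(), os.Args[1], os.Args[2])
      if err != nil {
          fmt.Fprintln(os.Stderr, err)
          os.Exit(1)
      }
      defer rc.Close()
      io.Copy(os.Stdout, rc) // dump the file to stdout
  }

The point is that the caller never cares whether the path is a local directory or a cloud remote; swapping storage is just a string change.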

coreylane - 17 hours ago

RClone has been so useful over the years I built a fully managed service on top of it specifically for moving data between cloud storage providers: https://dataraven.io/

My goal is to smooth out some of the operational rough edges I've seen companies deal with when using the tool:

  - Team workspaces with role-based access control
  - Event notifications & webhooks – Alerts on transfer failure or resource changes via Slack, Teams, Discord, etc.
  - Centralized log storage
  - Vault integrations – Connect 1Password, Doppler, or Infisical for zero-knowledge credential handling (no more plain text files with credentials)
  - 10 Gbps connected infrastructure (Pro tier) – High-throughput Linux systems for large transfers

edvardsire - 12 hours ago

Interesting that nobody has mentioned Warp speed Data Transfer (WDT) [1].

From the readme:

- Warp speed Data Transfer (WDT) is an embeddable library (and command line tool) aiming to transfer data between 2 systems as fast as possible over multiple TCP paths.

- Goal: Lowest possible total transfer time - to be only hardware limited (disc or network bandwidth not latency) and as efficient as possible (low CPU/memory/resources utilization)

1. https://github.com/facebook/wdt

newsoftheday - 16 hours ago

I prefer rsync because of its delta transfer, which skips files already on the destination and only sends the changed parts of files that differ, saving bandwidth. Combined with rsync's ability to work over SSH, this lets me sync anywhere rsync runs, including the cloud. It may not be faster than rclone, but it is easier on bandwidth.
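
A typical invocation over SSH looks something like this (host and paths are placeholders):

  rsync -avz --partial --progress ./photos/ user@backup-host:/srv/backup/photos/

-z adds compression, which, as noted elsewhere in the thread, you may want to drop for already-compressed data.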

cachius - 17 hours ago

rclone --multi-thread-streams allows transfers in parallel, like robocopy /MT

You can also run multiple instances of rsync; the problem is how to efficiently divide the set of files.
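
One crude way to fan out several rsync instances, assuming the top-level directories split the data reasonably evenly (paths, host and process count are placeholders):

  # one rsync per top-level entry, at most 8 at a time
  ls /data | xargs -P8 -I{} rsync -a /data/{} user@host:/backup/

It doesn't balance by size, so one huge directory can still dominate the wall-clock time.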

SloopJon - 11 hours ago

The article links to a YouTube mini-review of USB enclosures from UGreen and Acasis, neither of which he loves.[1] I've been happy with the OWC 1M2 as a boot drive on a Mac Studio with Thunderbolt 5 ports.[2] I just noticed that there is an OWC 1M2 80G, based on USB4 v2.[3] I didn't know that was a thing, but I guess it's the USB cousin to Thunderbolt 5.

[1] https://www.youtube.com/watch?v=gaV-O6NPWrI

[2] https://eshop.macsales.com/shop/owc-express-1m2

[3] https://eshop.macsales.com/item/OWC/US4V2EXP1M2/

ftchd - 13 hours ago

Rclone is such an elegant piece of software, reminds me of the time when most software worked well most of the time. There are few people who wouldn't benefit from it, either as a developer or an end-user.

I'm currently working on the GUI if you're interested: https://github.com/rclone-ui/rclone-ui

weddingbell - 7 hours ago

Yesterday I set up Rclone, and the download speed from Google Drive was so slow that caching took a long time. After applying the method described in this article, the speed improved.

This is my mount configuration. What do you think? Is there anything that might be causing issues??

  rclone mount google_drive: X: ^
  --vfs-cache-mode full ^
  --vfs-cache-max-age 24h ^
  --vfs-cache-max-size 50G ^
  --vfs-read-ahead 1G ^
  --cache-dir "./rclone_cache" ^
  --vfs-read-chunk-size 128M ^
  --vfs-read-chunk-size-limit off ^
  --buffer-size 128M ^
  --dir-cache-time 1000h ^
  --drive-chunk-size 64M ^
  --poll-interval 15s ^
  --vfs-cache-poll-interval 1m ^
  --multi-thread-streams 32 ^
  --drive-skip-shortcuts ^
  --drive-acknowledge-abuse ^
  --network-mode

indigodaddy - 17 hours ago

One thing that perhaps sets rsync apart is its handling of hard links, so you don't send duplicate copies of the same file to the destination. Not sure if rclone can do that.
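
For reference, rsync's flag for this is -H/--hard-links (it is not included in -a); it recreates the links on the destination instead of sending duplicate file data:

  rsync -aH /src/ user@host:/dst/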

sigmonsays - 3 hours ago

for fun, try these:

  1. tar + nc
  2. rsync
  3. scp

#1 is the fastest way to send because it keeps the buffers full in one consistent stream, which lets the TCP window grow as large as possible. #2 and #3 (rsync and scp) do round-trip acks to the remote side, which drastically slows things down, even if done in parallel.
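
A minimal version of #1, assuming OpenBSD-style netcat (port and paths are arbitrary; some netcat variants want -p before the port):

  # on the receiver
  nc -l 9000 | tar -xf - -C /dest
  # on the sender
  tar -cf - /data | nc receiver-host 9000

No encryption and no resume, but nothing in the path to stall the stream.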

xoa - 16 hours ago

Thanks for sharing, hadn't seen it but at almost the same time he made that post I too was struggling to get decent NAS<>NAS transfer speeds with rsync. I should have thought to play more with rclone! I ended up using iSCSI but that is a lot more trouble.

>In fact, some compression modes would actually slow things down as my energy-efficient NAS is running on some slower Arm cores

Depending on the number/type of devices in the setup and usage patterns, it can sometimes be effective to have a single more powerful router and then use it directly as a hop for security or compression (or both) to a set of lower-power devices. Like, I know it's not E2EE the same way to send unencrypted data to one OPNsense router, WireGuard (or Nebula or whatever tunnel you prefer) to another over the internet, and then from there to a NAS. But if the NAS is in the same physically secure rack, directly attached by hardline to the router (or via an isolated switch), I don't think in practice it's enough less secure at the private-service level to matter.

If the router is a pretty important linchpin anyway, it can be favorable to lean more heavily on it so one can go cheaper and lower power elsewhere. Not that more efficiency, hardware acceleration etc. are at all bad, and conversely it sometimes makes sense to have a powerful NAS/other servers and a low-power router, but there are good degrees of freedom there. Handier than ever in the current crazy times, where hardware that was formerly easily and cheaply available is sometimes a king's ransom or gone entirely and one has to improvise.

kwanbix - 14 hours ago

It is crazy to see how difficult Google makes it for anyone to download their own pictures from Google Photos. Rclone used to let you download them, but not anymore; only the ones uploaded by Rclone are available to download. I wish someone forced all cloud providers to let you download your own data. And no, Google Takeout doesn't count; it is horrible to use.

Dunedan - 15 hours ago

I wonder if at least part of the reason for the speedup isn't the multi-threading, but rather that rclone maybe doesn't compress transferred data by default. That's what rsync does when using SSH, so for already-compressed data (like videos, for example), disabling SSH compression when invoking rsync speeds it up significantly:

  rsync -e "ssh -o Compression=no" ...

aidenn0 - 15 hours ago

rclone is not as good as rsync for doing ad-hoc transfers; for anything not using the filesystem, you need to set up a configuration, which adds friction. It really is purpose-built for recurring transfers rather than "I need to move X to Y just once".

KolmogorovComp - 16 hours ago

Why are rclone/rsync never used by default for app updates? Especially games with large assets.

packetlost - 17 hours ago

I use tab-complete to navigate remote folder structures with rsync all the time, does rclone have that?

rurban - 14 hours ago

Thanks for the lms tips in the comments. Amazing!

cranberryturkey - 6 hours ago

The parallelism advantage of rclone is real but undersold here. rsync's single-stream design made sense when networks were the bottleneck. Now with high-bandwidth links (especially to cloud storage), the bottleneck is often the round-trip latency of per-file metadata operations.

rclone's multi-threaded transfers effectively pipeline those operations. It's the same principle as why HTTP/2 multiplexing was such a win — you stop paying the latency tax sequentially.

One thing I'd add: for local-to-local or LAN sync, rsync still often wins because the overhead of rclone's abstraction layer isn't worth it when latency is already sub-millisecond. The 4x speedup is really a story about high-latency, high-bandwidth paths where parallelism dominates.
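
For concreteness, the knobs involved on the rclone side look something like this (remote name and numbers are only illustrative):

  rclone copy /data remote:bucket/backup --transfers 16 --checkers 16 --multi-thread-streams 4 -P

--transfers parallelizes across files, --multi-thread-streams splits individual large files into parallel streams, and --checkers parallelizes the metadata comparisons that dominate on high-latency paths.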

hsbauauvhabzb - 7 hours ago

I love rclone. I use it in place of scp/sftp frequently. My biggest complaint is that it seems to require a config file, having the ability to `rclone copy root@192.168.1.1:/tmp/foo ./` would be a game changer.

sneak - 15 hours ago

What’s sad to me is that rsync hasn’t been touched to fix these issues in what feels like decades.

tonymet - 11 hours ago

golang's concurrent IO is so accessible that even trivial IO transform scripts (e.g. compression, base64, md5sum/cksum) are very easy to spread across multiple cores.

You'd be astonished at how much faster even seemingly fast local IO can go when you stop blocking on it.
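
A toy sketch of the pattern (made-up example, not from the article): a fixed worker pool checksumming every file under a directory instead of one sequential loop.

  package main

  import (
      "crypto/sha256"
      "fmt"
      "io"
      "io/fs"
      "os"
      "path/filepath"
      "runtime"
      "sync"
  )

  func main() {
      paths := make(chan string)
      var wg sync.WaitGroup
      for i := 0; i < runtime.NumCPU(); i++ { // one worker per core
          wg.Add(1)
          go func() {
              defer wg.Done()
              for p := range paths {
                  f, err := os.Open(p)
                  if err != nil {
                      continue
                  }
                  h := sha256.New()
                  io.Copy(h, f)
                  f.Close()
                  fmt.Printf("%x  %s\n", h.Sum(nil), p)
              }
          }()
      }
      // feed every regular file under the given root to the workers
      filepath.WalkDir(os.Args[1], func(p string, d fs.DirEntry, err error) error {
          if err == nil && !d.IsDir() {
              paths <- p
          }
          return nil
      })
      close(paths)
      wg.Wait()
  }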

baal80spam - 17 hours ago

I'll keep saying that rclone is a fantastic and underrated piece of software.
