An overengineered solution to `sort | uniq -c` with 25x throughput (hist)

github.com

89 points by noamteyssier 5 days ago


zX41ZdbW - 7 hours ago

This and similar tasks can be solved efficiently with clickhouse-local [1]. Example:

    ch --input-format LineAsString --query "SELECT line, count() AS c GROUP BY line ORDER BY c DESC" < data.txt
I've tested it and it is faster than both sort and this Rust code:

    time LC_ALL=C sort data.txt | uniq -c | sort -rn > /dev/null
    32 sec.

    time hist data.txt > /dev/null
    14 sec.

    time ch --input-format LineAsString --query "SELECT line, count() AS c GROUP BY line ORDER BY c DESC" < data.txt > /dev/null
    2.7 sec.
It is like a Swiss Army knife for data processing: it can solve various tasks, such as joining data from multiple files and data sources, processing various binary and text formats, converting between them, and accessing external databases.

[1] https://clickhouse.com/docs/operations/utilities/clickhouse-...

MontyCarloHall - 2 hours ago

>I use nucgen to generate a random 100M line FASTQ file and pipe it into different tools to compare their throughput with hyperfine.

This is a strange benchmark [0] -- here is what this random FASTQ looks like:

  $ nucgen -n 100000000 -l 20 | head -n8
  
  >seq.0
  TGGGGTAAATTGACAGTTGG
  >seq.1
  CTTCTGCTTATCGCCATGGC
  >seq.2
  AGCCATCGATTATATAGACA
  >seq.3
  ATACCCTAGGAGCTTGCGCA
There are going to be very few [*] repeated strings in this 100M-line file, since each >seq.X header is unique and there are roughly a trillion random 4-letter (ACGT) strings of length 20. So this is really assessing how well a hash table copes with repeatedly reallocating as it fills up.
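
To make that concrete, here is a purely illustrative Rust fragment (not code from hist): with near-all-unique keys, a counting map spends much of its time growing and rehashing, and pre-sizing it is the usual way to take that cost off the table.

    // Illustrative only. With almost-all-unique keys the map repeatedly grows and
    // rehashes; reserving capacity up front removes exactly that cost.
    use std::collections::HashMap;

    fn count_lines<'a>(lines: impl Iterator<Item = &'a str>, expected: usize) -> HashMap<&'a str, u64> {
        // enough buckets for `expected` distinct keys, so no rehash during insertion
        let mut counts = HashMap::with_capacity(expected);
        for line in lines {
            *counts.entry(line).or_insert(0) += 1;
        }
        counts
    }

    fn main() {
        let reads = "ACGT\nTTTT\nACGT\n";
        println!("{:?}", count_lines(reads.lines(), 3));
    }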

I did not have enough RAM to run a 100M line benchmark, but the following simple `awk` command performed ~15x faster on a 10M line benchmark (using the same hyperfine setup) versus the naïve `sort | uniq -c`, which isn't bad for something that comes standard with every *nix system.

  awk '{ x[$0]++ } END { for(y in x) { print y, x[y] }}' <file> | sort -k2,2nr
[0] https://github.com/noamteyssier/hist-rs/blob/main/justfile

[*] Birthday problem math says about 250, for 50M strings sampled from a pool of ~1T.

noamteyssier - 5 days ago

Was sitting around in meetings today and remembered an old shell script I had for counting the number of unique lines in a file. Gave it a shot in Rust, and with a little bit of (over-engineering)™ I managed to get 25x the throughput of the naive coreutils approach, as well as improve on some existing tools.

Some notes on the improvements:

1. using csv (serde) for writing leads to some big gains

2. arena allocation of incoming keys + storing references in the hashmap instead of owned values heavily reduces the number of allocations and improves cache efficiency (I'm guessing; I did not measure); rough sketch below
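
Point 2 as a rough, hypothetical sketch (not the actual hist code; it assumes the bumpalo crate for the arena, and the double lookup is only for brevity -- something like hashbrown's entry_ref avoids it):

    // Not the actual hist code -- a minimal sketch of the arena + borrowed-key idea.
    use std::collections::HashMap;
    use std::io::{self, BufRead};

    use bumpalo::Bump;

    fn main() -> io::Result<()> {
        let arena = Bump::new(); // every distinct line's bytes live here, allocated once
        let mut counts: HashMap<&str, u64> = HashMap::new();

        for line in io::stdin().lock().lines() {
            let line = line?;
            if !counts.contains_key(line.as_str()) {
                // first sighting: copy the bytes into the arena and key the map
                // on the borrowed &str instead of an owned String
                let key: &str = arena.alloc_str(&line);
                counts.insert(key, 0);
            }
            *counts.get_mut(line.as_str()).unwrap() += 1;
        }

        // aggregate first, then sort only the distinct (line, count) pairs
        let mut rows: Vec<(&str, u64)> = counts.into_iter().collect();
        rows.sort_unstable_by_key(|&(_, n)| std::cmp::Reverse(n));
        for (line, n) in rows {
            println!("{n}\t{line}");
        }
        Ok(())
    }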

There is some regex functionality and some table filtering built in as well.

happy hacking

donatj - 7 hours ago

I created "unic" a number of years ago because I had need to get the unique lines from a giant file without losing the order they initially appeared. It achieves this using a Cuckoo Filter so it's pretty dang quick about it, faster than sorting a large file in memory for sure.

https://github.com/donatj/unic
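
A minimal Rust sketch of the same order-preserving idea (not unic itself; an exact HashSet stands in for the cuckoo filter, which trades a small false-positive rate for far less memory):

    // Order-preserving dedup sketch; a HashSet stands in for unic's cuckoo filter.
    use std::collections::HashSet;
    use std::io::{self, BufRead, BufWriter, Write};

    fn main() -> io::Result<()> {
        let stdout = io::stdout();
        let mut out = BufWriter::new(stdout.lock());
        let mut seen: HashSet<String> = HashSet::new();

        for line in io::stdin().lock().lines() {
            let line = line?;
            // emit each line only the first time it appears, keeping input order
            if seen.insert(line.clone()) {
                writeln!(out, "{line}")?;
            }
        }
        Ok(())
    }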

nasretdinov - 7 hours ago

Note that by default the sort command has pretty low memory usage and spills to disk. You can improve throughput quite a bit by increasing the allowed memory usage with --buffer-size=SIZE.

ashvardanian - 2 hours ago

Storage, strings, sorting, counting, bioinformatics... I got nerd-sniped! Can't resist a shameless plug here :)

Looking at the code, there are a few things I would consider optimizing. I'd start by trying (my) StringZilla for hashing and sorting.

HashBrown collections under the hood use aHash, which is an excellent hash function, but on both short and long inputs, on new CPUs, StringZilla seems faster [0]:

                               short               long
  aHash::hash_one         1.23 GiB/s         8.61 GiB/s
  stringzilla::hash       1.84 GiB/s        11.38 GiB/s

A similar story with sorting strings: inner loops of arbitrary-length string comparisons often dominate such workloads, and doing it in a more radix-style fashion can 4x your performance [1]:

                                                    short                  long
  std::sort_unstable_by_key           ~54.35 M compares/s    57.70 M compares/s
  stringzilla::argsort_permutation   ~213.73 M compares/s    74.64 M compares/s
Bear in mind that "compares/s" is a made-up metric here; in reality, I'm deriving it from the measured durations.

[0] https://github.com/ashvardanian/StringWars?tab=readme-ov-fil...

[1] https://github.com/ashvardanian/StringWars?tab=readme-ov-fil...

f311a - 5 hours ago

People often use sort | uniq when they don't want to load a bunch of lines into memory. That's why it's slow. It uses files and allocates very little memory by default. The pros? You can sort hundreds of gigabytes of data.

This Rust implementation uses a hashmap; if you have a lot of unique values, you will need a lot of RAM.

Someone - 7 hours ago

> I am measuring the performance of equivalent cat <file> | sort | uniq -c | sort -n functionality.

It likely won’t matter much here, but invoking cat is unnecessary.

   sort <file> | uniq -c | sort -n
will do the job just fine. GNU’s sort also has a few flags controlling buffer size and parallelism. Those may matter more (see https://www.gnu.org/software/coreutils/manual/html_node/sort...)

trollbridge - 2 hours ago

This reminds me of a program I wrote to do the same thing that wc -L does, except a lot faster. I had to run it on a corpus of data that was many gigabytes (terabytes) in size, far too big to fit in RAM. MIT license.

https://github.com/JoshRodd/mll

jll29 - 5 hours ago

I use questions around this pipeline in interviews. As soon as people say they'd write a Python program to sort a file, they get rejected.

Arguably, this results in a slower solution in most cases, but the reason for the rejection is wasting developer time (not to mention time spent testing for correctness) re-developing something that is already available in the OS.

majke - 6 hours ago

I thought my mmuniq holds the crown!

https://blog.cloudflare.com/when-bloom-filters-dont-bloom/

https://github.com/majek/mmuniq

noctune - 7 hours ago

I built something similar a few years ago for `sort | uniq -d` using sketches. The downside is that you need two passes, but it's still faster overall than sorting: https://github.com/mpdn/sketch-duplicates
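
For illustration, the two-pass shape looks roughly like this (a sketch, not sketch-duplicates itself; a plain map of 64-bit line hashes stands in for the real counting sketch, which would bound memory at the cost of some false positives):

    // Rough two-pass sketch, not sketch-duplicates itself. A plain map of
    // 64-bit line hashes stands in for a real counting sketch.
    use std::collections::hash_map::DefaultHasher;
    use std::collections::HashMap;
    use std::fs::File;
    use std::hash::{Hash, Hasher};
    use std::io::{self, BufRead, BufReader};

    fn hash_line(line: &str) -> u64 {
        let mut h = DefaultHasher::new();
        line.hash(&mut h);
        h.finish()
    }

    fn main() -> io::Result<()> {
        let path = std::env::args().nth(1).expect("usage: dupes <file>");

        // pass 1: count hashes of lines rather than the lines themselves
        let mut counts: HashMap<u64, u32> = HashMap::new();
        for line in BufReader::new(File::open(&path)?).lines() {
            *counts.entry(hash_line(&line?)).or_insert(0) += 1;
        }

        // pass 2: print every occurrence of a line whose hash was seen more than once
        for line in BufReader::new(File::open(&path)?).lines() {
            let line = line?;
            if counts.get(&hash_line(&line)).copied().unwrap_or(0) > 1 {
                println!("{line}");
            }
        }
        Ok(())
    }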

G_o_D - 3 hours ago

Why no mention of awk? awk '!a[$0]++'

ukuina - 6 hours ago

Neat!

Are there any tools that tolerate slight mismatches across lines while combining them (e.g., a timestamp, or only one text word changing)?

I attempted this with a vector DB, but the embeddings calculation for millions of lines is prohibitive, especially on CPU.

scaredginger - 6 hours ago

Looks like the impl uses a HashMap. I'd be curious about how a trie or some other specialized string data structure would compare here.

fsiefken - 5 hours ago

I'm curious how much faster this is compared to the Rust uutils coreutils ports of sort and uniq.

vlovich123 - 7 hours ago

Why does this test against sort | uniq | sort? It's kind of weird to sort twice, no?

flowerthoughts - 8 hours ago

The win here might be using a HashMap to avoid having to sort all entries, then sorting only the aggregated counts at the end. What's the ratio of duplicates in the benchmark input?

There is no text encoding processing, so this only works for single byte encodings. That probably speeds it up a little bit.

Depending on the size of the benchmark input, sort(1) may have done disk-based sorting. What's the size of the benchmark input?