A love letter to the CSV format (2024)

medialab.sciencespo.fr

107 points by jordigh 3 days ago


joz1-k - 3 days ago

Setting aside that the comma was a poor choice of separator, CSV is just plain text that can be trivially parsed from any language or platform. That's its biggest value. There is essentially no format, library, or platform lock-in. JSON comes close to this level of openness and ease, but YAML is already too complicated as a file format.

roland35 - 3 days ago

To the people saying that "your boss can open it" is a benefit of CSV: well, I have a funny story!

Back in the early 2000s I designed and built a custom data collector for an air force project. It saved data at 100 Hz to an SD card. The project manager loved it! He could pop the SD card out or use the handy USB mass storage mode to grab the CSV files.

The only problem... Why did the data cut off after about 10 minutes?? I couldn't see the actual data collected since it was secret, but I had no issue on my end, assuming there was space on the card and battery life was good.

Turns out, he was using Excel 2003 to open the CSV file. It has a 65,536-row limit (does that number look familiar? It's 2^16), and at 100 Hz that's just under 11 minutes of data. That took a while to figure out!!

hiAndrewQuinn - 3 days ago

I like CSV because its simplicity and ubiquity make it an easy Schelling point in the wide world of corporate communication. Even very non-technical people can, with some effort, figure out how to save a CSV from Excel, and figure out how to open a CSV with Notepad if absolutely necessary.

On the technical side, libraries like pandas have undergone extreme selection pressure to be able to read in Excel's weird CSV choices without breaking. At that point we have the luxury of writing them out as "proper" CSV, or as a SQLite database, or as whatever else we care about. It's just a reasonable crossing-over point.
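A rough sketch of that crossing-over move with pandas (the file names are made up, and the encoding/delimiter quirks shown are typical of Excel exports, not guaranteed):

    import sqlite3
    import pandas as pd

    # Sniff the delimiter (comma vs. semicolon) and cope with a
    # Windows-Excel encoding; both quirks are common, not universal.
    df = pd.read_csv("export_from_excel.csv",
                     sep=None, engine="python",
                     encoding="cp1252")

    df.to_csv("clean.csv", index=False)       # write back out as plain UTF-8 CSV
    with sqlite3.connect("data.db") as con:
        df.to_sql("data", con, index=False)   # ...or land it in SQLite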

ayhanfuat - 3 days ago

Previously: A love letter to the CSV format (https://github.com/medialab/xan/blob/master/docs/LOVE_LETTER...)

708 points | 5 months ago | 698 comments (https://news.ycombinator.com/item?id=43484382)

untrimmed - 3 days ago

This is a great defense, but I feel like it misses the single biggest reason CSV will never die: your boss can open it. We can talk about streaming and Parquet all day, but if the marketing team can't double-click the file, it's useless.

guzik - 3 days ago

I am glad that we decided to pick CSV as our default format for health data (even for heavy stuff like raw ECG). Yeah, the files were bigger, but clients loved that they could just download them, open them in Excel, and make a quick chart. Meanwhile, other software insisted on EDF (lighter, sure), but not everything could handle it.

mcdonje - 3 days ago

>Excel hates CSV. It clearly means CSV must be doing something right.

Use tabs as the delimiter and Excel interoperates with the format as if it were native.
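For what it's worth, Python's csv module even ships an "excel-tab" dialect for exactly this (a minimal sketch; the file name is made up):

    import csv

    rows = [["name", "note"], ["Ada", "likes, commas"]]
    with open("data.txt", "w", newline="", encoding="utf-8") as f:
        csv.writer(f, dialect="excel-tab").writerows(rows)  # tab-separated output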

efitz - 3 days ago

I don’t think I’ve ever heard anyone say “CSV is dead”.

Smart people (who have been burned once too many times) put quotes around fields in CSV if they aren't 100% positive the field will be comma-free, and escape the quotes inside such fields.
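With Python's csv module that defensive style is one flag away, and embedded quotes are escaped by doubling them by default (a minimal sketch):

    import csv, io

    buf = io.StringIO()
    w = csv.writer(buf, quoting=csv.QUOTE_ALL)  # quote every field, no guessing
    w.writerow(['He said "hi"', "1,000", "two\nlines"])
    print(buf.getvalue())
    # prints: "He said ""hi""","1,000","two
    #         lines"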

femto - 3 days ago

CSV is good for debugging C/C++ real-time signal processing data paths.

Add cout or printf lines that, on each iteration, print out the relevant intermediate values separated by commas, with the first cell being a constant tag. Provided you don't overdo it, the software will typically still run in real time. Pipe stdout to a file.

After the fact, you can then use grep to filter tags to select which intermediate results you want to analyse. This filtered data can be loaded into a spreadsheet, or read into a higher level script for analysis/debugging/plotting/... In this way you can reproducibly visualise internal operation over a long period of time and see infrequent or subtle deviations from expected behaviour.
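The analysis half of that workflow, sketched in Python (the "AGC" tag, the log name, and the column names here are all hypothetical):

    import pandas as pd
    from io import StringIO

    # stdout was piped to run.log; pull out just the lines we tagged
    with open("run.log") as f:
        agc = "".join(line for line in f if line.startswith("AGC,"))

    df = pd.read_csv(StringIO(agc), header=None,
                     names=["tag", "iter", "gain", "power"])
    df.plot(x="iter", y="gain")  # eyeball long-run behaviour for rare deviations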

lan321 - 3 days ago

I hate parsing CSV. There are so many different implementations that it's a constant game of cat and mouse. Literally any symbol can be the separator; then the ordering starts getting changed; then, since you have to guess what's where, you go by type, but strings, for example, are sometimes in quotes and sometimes not; then you get decimals written with a comma when the values are also separated by commas, so you have to track what's a separator and what's a decimal comma. Then you get some line with only 2 elements when you expect 7 and have no clue what to do, because there's no documentation for the output and hence no telling what that line means.

If the CSV is not written by me, it's always an exercise in making things as difficult as possible. It might be a tad smaller as a format, but I find the parsing to be so ass that you need a really good reason to use it.

Edit: Oh yeah, and some have a header while others don't. And the CSV always seems to come from some machine where the techs can come over to do an update and just reorder everything, because fuck your parsing. Then you either get lucky and the parser dies, or, since you don't really have much info, the types just happen to align and you start saving garbage data to your database until a domain expert notices something isn't quite right, and you have to find out when someone last touched the machines and roll back/reparse everything.
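For the guess-the-dialect part of that, Python's csv.Sniffer can at least take a swing at it, though it can guess wrong and does nothing about reordered columns or decimal commas (a minimal sketch; the file name is made up):

    import csv

    with open("mystery.csv", newline="") as f:
        sample = f.read(8192)
        dialect = csv.Sniffer().sniff(sample)         # guess delimiter/quoting
        has_header = csv.Sniffer().has_header(sample)
        f.seek(0)
        reader = csv.reader(f, dialect)
        if has_header:
            next(reader)                              # skip the header row
        for row in reader:
            pass                                      # rows arrive as lists of strings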

1vuio0pswjnm7 - 3 days ago

Certainly I love CSV too, and I agree with most of the reasons put forth in the blog post

But I tried the group's "CSV magician" program and was not impressed

https://github.com/medialab/xan

The table output seems very similar to sqlite3

It's a 20MB musl dynamically-linked executable (cf. 1.7MB statically-linked sqlite3 executable)

Most of the "suite" of subcommands seemed to be aimed at accomplishing the same stuff I can do with sqlite3, and, if necessary, flex

There seemed to be no obvious way to disable color

The last straw was that it messed up the console

First it froze, and the pid was not visible with ps, so I had to kill the shell

Then it left me with no up/down or tab action

Yes I can fix this but I should not have to

I really want to keep an open mind and to believe in these rust console programs

But every time I try one, it is a huge executable, an assault of superfluous color, and generally no functionality that cannot be achieved with existing non-rust software

Even assuming I could get used to the annoying design, I like software that is reliable

Compared to the software I normally use, I cannot say these rust programs are equally reliable

I also like software where I can easily edit the source to change what I do not like

These rust programs would require more resources and time to compile, messing around with a package manager and heaps of dependencies, plus a new language, with no clear benefit... b/c TBH I am not losing any sleep worrying that the small text processing programs I use are written in C; after all, the operating systems I use are written in C and that is not changing anytime soon

jcattle - 3 days ago

If you don't care that much about the accuracy of your data (like only caring about a few decimal places of accuracy in your floats), you don't generate huge amounts of data, and you don't need to work with it across different tools and pass it back and forth, then yes, CSV CAN be nice.

I wouldn't write it a love letter though. There's a reason that parquet exists.
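For what it's worth, the float-accuracy loss is a property of how the writer formats numbers rather than of the text format itself: a fixed decimal count silently changes the data, while a full repr round-trips exactly (quick Python check):

    x = 0.1 + 0.2                  # 0.30000000000000004
    print(f"{x:.6f}")              # '0.300000' -> reads back as 0.3
    print(float(f"{x:.6f}") == x)  # False: the written value lost precision
    print(float(repr(x)) == x)     # True: shortest-repr floats round-trip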

vim-guru - 3 days ago

> Excel hates CSV

It clearly means CSV must be doing something right.

klinch - 3 days ago

Hot take: I prefer xlsx over CSV

I used to work on payment infrastructure and whenever a vendor offered us the choice between CSV and some other format we always opted for that other format (often xlsx). This sounds a bit weird, but when using xlsx and a good library for handling it, you never have to worry about encoding, escaping and number formatting.

This is one of these things that sound absolutely wrong from an engineering standpoint (xlsx is abhorrently complex on the inside), but works robustly in practice.

Slightly related: This was a German company, with EU and US payment providers. Also note that Microsoft Excel (and therefore a lot of other tools) produces "semicolon-separated values" files when started on a computer with the locale set to German...
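A sketch of what that worry-free handling looks like (openpyxl is one possible library choice, not necessarily what was used; the names are made up):

    from openpyxl import Workbook, load_workbook

    wb = Workbook()
    wb.active.append(["Zürich", 1234.56, "a,b;c"])  # unicode, a real number,
    wb.save("payments.xlsx")                        # and delimiters, no escaping

    row = next(load_workbook("payments.xlsx").active.values)
    # ('Zürich', 1234.56, 'a,b;c') -> the number comes back as a float, not a string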

xbmcuser - 3 days ago

I did not care for the CSV format much till I started using it with LLMs and Python scripts.

eviks - 3 days ago

> Okay, it's a lie,

Indeed, a lie only a lover would believe,,,

IanCal - 3 days ago

Counterpoint - CSV is absolutely atrocious and should be cast into the Sun.

It's unkillable, like many eldritch horrors.

> The specification of CSV holds in its title: "comma separated values". Okay, it's a lie, but still, the specification holds in a tweet and can be explained to anybody in seconds: commas separate values, new lines separate rows. Now quote values containing commas and line breaks, double your quotes, and that's it. This is so simple you might even invent it yourself without knowing it already exists while learning how to program.

Except that's just one way people do it. It's not universal, so you cannot take arbitrary CSV files in and parse them like this. And you can't take a CSV file constructed like this and pass it into any CSV-accepting program; many will totally break.

> Of course it does not mean you should not use a dedicated CSV parser/writer because you will mess something up.

Yes, and implementers often have messed something up.

> No one owns CSV. It has no real specification

Yep. So all these monstrosities in the real world are... maybe valid? Lots of totally broken CSV files can be parsed as CSV, but the result is wrong. Sometimes subtly.

> This means, by extension, that it can both be read and edited by humans directly, somehow.

One of the very common ways they get completely fucked up, yes. Someone sorts some rows and boom: broken, often with unrecoverable data loss (sorting lines splits apart quoted fields that contain newlines). Someone doesn't correctly add or remove a comma. Someone mixes two files that actually have differently encoded text.

> CSV can be read row by row very easily without requiring more memory than what is needed to fit a single row.

CSV must be parsed row by row.

> By comparison, column-oriented data formats such as parquet are not able to stream files row by row without requiring you to jump here and there in the file or to buffer the memory cleverly so you don't tank read performance.

Sort of? Yes, if you're building your own parser, but who is doing that? It's also not hard with things like parquet.

> But of course, CSV is terrible if you are only interested in specific columns because you will indeed need to read all of a row only to access the part you are interested in.

Or if you're interested in a specific row, because you're going to have to carefully parse out every row until you get there.

CSV does not have a reliable row separator. Or rather it does, but that separator is also allowed to appear inside quoted values without meaning "separate these rows", so you can't simply trust it.
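Concretely, in Python (a small self-contained demonstration):

    import csv, io

    data = 'id,comment\n1,"line one\nline two"\n2,ok\n'
    print(len(data.splitlines()))                    # 4 physical lines...
    print(len(list(csv.reader(io.StringIO(data)))))  # ...but only 3 records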

> But critics of CSV coming from this set of practices tend to only care about use-cases where everything is expected to fit into memory.

Parquet uses row groups, which means you can stream chunks easily; those chunks carry metadata, so you can easily filter out rows you don't need too.

I much more often need to keep the whole thing in memory when working with CSV than with parquet. With parquet I don't even need to fit all the rows on disk; I can read just the chunk I want remotely.
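That streaming pattern, sketched with pyarrow (one common choice; the file and column names are made up):

    import pyarrow.parquet as pq

    pf = pq.ParquetFile("big.parquet")
    for batch in pf.iter_batches(batch_size=65_536,
                                 columns=["user", "amount"]):
        ...  # process one chunk at a time, reading only the columns you need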

> CSV can be appended to

Yeah, that's easier. Row groups mean you can still do this, though granted it's not as easy. *However* I will point out that absolutely nothing stops someone from completely borking things by appending something that's not exactly the right format.

> CSV is dynamically typed

Not really. Everything is a string. You can do that with anything else if you want to; JSON can have numbers of any size if you just store them as strings.

> CSV is succinct

Yes, more so than JSONL, but not really more than (you guessed it) parquet. It's also horrific for compression compared to columnar formats.

> Reverse CSV is still valid CSV

Use a file format that doesn't absolutely suck and you can parse things in reverse if you want. More usefully, you can parse just the sections you actually care about!

> Excel hates CSV

Helpfully this just means that the most popular way of working with tabular data in the world doesn't play that nicely with it.