Why I stopped using JSON for my APIs

aloisdeniel.com

165 points by barremian a day ago


dfabulich - a day ago

> With JSON, you often send ambiguous or non-guaranteed data. You may encounter a missing field, an incorrect type, a typo in a key, or simply an undocumented structure. With Protobuf, that’s impossible. Everything starts with a .proto file that defines the structure of messages precisely.

This deeply misunderstands the philosophy of Protobuf. proto3 doesn't even support required fields. https://protobuf.dev/best-practices/dos-donts/

> Never add a required field, instead add `// required` to document the API contract. Required fields are considered harmful by so many they were removed from proto3 completely.

Protobuf clients need to be written defensively, just like JSON API clients.
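
For anyone who hasn't hit this: a minimal sketch of what "defensively" means in practice, assuming a hypothetical user_pb2 module generated from a proto3 file. Parsing never fails on a missing field; the field just silently holds its zero value, so the client has to validate it exactly like a JSON body:

    # Hypothetical user.proto (proto3):
    #   message User {
    #     string email = 1;  // "required" by contract comment only
    #     int32  age   = 2;
    #   }
    from user_pb2 import User  # assumed protoc-generated module

    def load_user(payload: bytes) -> User:
        user = User()
        user.ParseFromString(payload)  # succeeds even on b"" -- no error
        if not user.email:             # same defensive checks as for JSON
            raise ValueError("email missing from User message")
        if user.age < 0:
            raise ValueError("age out of range")
        return user

    empty = User()
    empty.ParseFromString(b"")     # parses "successfully"
    print(empty.email, empty.age)  # '' 0 -- indistinguishable from explicit zeros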

pyrolistical - a day ago

Compressed JSON is good enough and requires less human communication initially.

Sure, it will blow up in your face when a field goes missing or a value changes type.

People who advocate paying the higher cost up front to perfectly type the entire data structure AND propose a process to perform version updates that keep client and server in sync are going to lose most of the time.

The zero cost of starting with JSON is too compelling even if it has a higher total cost due to production bugs later on.

When judging which alternative will succeed, lower perceived human cost beats lower machine cost every time.

This is why JSON is never going away until it gets replaced by something with an even lower human-communication cost.

pzmarzly - a day ago

> With Protobuf, that’s impossible.

If your servers and clients push at different times, they are compiled against different versions of your specs, and many safety bets are off.

There are ways to be mostly safe (never reuse IDs, use unknown-field-friendly copying methods, etc.), but distributed systems are distributed systems, and protobuf isn't a silver bullet that can solve all problems on author's list.

On the upside, it seems like protobuf3 fixed a lot of stuff I used to hate about protobuf2. Issues like:

> if the field is not a message, it has two states:

> - ...

> - the field is set to the default (zero) value. It will not be serialized to the wire. In fact, you cannot determine whether the default (zero) value was set or parsed from the wire or not provided at all

are now gone if you stick to using protobuf3 + `message` keyword. That's really cool.
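
To make that concrete, a sketch of the presence behavior, assuming a hypothetical profile_pb2 generated from proto3 (the Age wrapper message is the trick the `message` keyword enables):

    # Hypothetical profile.proto (proto3):
    #   message Age     { int32 value = 1; }
    #   message Profile {
    #     int32 score = 1;  // scalar: zero vs. unset is indistinguishable
    #     Age   age   = 2;  // message field: presence is tracked
    #   }
    from profile_pb2 import Profile  # assumed protoc-generated module

    p = Profile()
    p.ParseFromString(b"")

    print(p.score)            # 0 -- sent as zero, or never sent? Can't tell.
    print(p.HasField("age"))  # False -- message fields do track presence
    # p.HasField("score") raises ValueError: plain proto3 scalars have no
    # presence (unless declared `optional` in newer protoc versions).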

davedx - 20 hours ago

"Ultra-efficient"

Searched the article: no mention of gzip, or of how, most of the time, all that text data (HTML, JS, and CSS too!) you're sending over the wire will be automatically compressed into... an efficient binary format!

So really, the author should compare protobufs to gzipped JSON.
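
The comparison is a couple of lines of stdlib, for anyone curious (sizes here are illustrative, not a benchmark):

    import gzip
    import json

    payload = {"users": [{"id": i, "name": f"user{i}", "active": True}
                         for i in range(100)]}
    raw = json.dumps(payload).encode("utf-8")

    print(len(raw), len(gzip.compress(raw)))  # repeated keys compress very well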

codewritero - a day ago

I love to see people advocating for better protocols and standards, but given the title I expected the author to present something that supports the same or more use cases with better efficiency and/or ergonomics, and I don't think protobuf does that.

Protobuf has its advantages, but its strict schema requirement means it lacks support for a ton of use cases where JSON thrives.

A much stronger argument could be made for CBOR as a replacement for JSON for most use cases. CBOR has the same schema flexibility as JSON but has a more concise encoding.
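
For illustration, a sketch assuming the third-party cbor2 package (pip install cbor2):

    import json
    import cbor2

    doc = {"name": "Ada", "scores": [1, 2, 3], "active": True}

    as_json = json.dumps(doc).encode("utf-8")
    as_cbor = cbor2.dumps(doc)

    print(len(as_json), len(as_cbor))   # CBOR is smaller, no schema needed
    assert cbor2.loads(as_cbor) == doc  # round-trips JSON-like data as-is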

wallaconno - 3 hours ago

My criticism is that protobuf is a premature optimization for most projects. I want to love protobuf, but it usually slows down my development pace and it’s just not worth it for most web / small data projects.

Distributing the contract with lock-step deploys between services is a bit much. JSON parsing time and payload size are not usually key factors in my projects. Protobuf doesn't impose strict data validation, so I have to do that anyway. Losing data readability in transit is a huge problem.
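
To illustrate the validation point: the checks protobuf won't do for you are the same ones you'd write over plain JSON anyway, stdlib only (the field names are made up):

    import json

    def parse_user(body: bytes) -> dict:
        data = json.loads(body)  # wire format parsed; nothing validated yet
        if not isinstance(data.get("email"), str) or "@" not in data["email"]:
            raise ValueError("bad email")
        if not isinstance(data.get("age"), int) or data["age"] < 0:
            raise ValueError("bad age")
        return data  # and the bytes on the wire stayed human-readable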

Protobuf seems like it would be amazing in situations where the data is already illegible and tightly coupled, such as embedded CAN or radio.

brabel - a day ago

Mandatory comment about ASN.1: a protocol from 1984 that already did what Protobuf does, with more flexibility. Yes, it's a bit ugly, but if you stick to the DER encoding it's really not worse than Protobuf at all. Check out the Wikipedia example:

https://en.wikipedia.org/wiki/ASN.1#Example_encoded_in_DER

Protobuf is ok, but if you actually look at how the serializers work, it's just too complex for what it achieves.
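
For flavor, a sketch of the same define-a-schema-then-encode workflow in DER, assuming the third-party pyasn1 package (the Point type is made up for illustration; it's not the Wikipedia example):

    from pyasn1.type import namedtype, univ          # pip install pyasn1
    from pyasn1.codec.der import decoder, encoder

    class Point(univ.Sequence):
        componentType = namedtype.NamedTypes(
            namedtype.NamedType("x", univ.Integer()),
            namedtype.NamedType("y", univ.Integer()),
        )

    p = Point()
    p["x"] = 5
    p["y"] = 10

    der = encoder.encode(p)  # DER: a canonical, deterministic binary encoding
    decoded, _ = decoder.decode(der, asn1Spec=Point())
    assert int(decoded["x"]) == 5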