DEC64: Decimal Floating Point (2020)
crockford.com | 50 points by vinhnx | 9 days ago
What's the point of saying that it is "very well suited to all applications that are concerned with money" and then writing 3.6028797018963967E+143, which is obviously missing a few gigamultiplujillion.
Previously:
https://news.ycombinator.com/item?id=7365812 (2014, 187 comments)
https://news.ycombinator.com/item?id=10243011 (2015, 56 comments)
https://news.ycombinator.com/item?id=16513717 (2018, 78 comments)
https://news.ycombinator.com/item?id=20251750 (2019, 37 comments)
Also my past commentary about DEC64:
> Most strikingly, DEC64 doesn't do normalization, so comparison will be a nightmare (as you have to normalize in order to compare!). He tried to special-case integer-only arguments, which hides the fact that non-integer cases are much, much slower thanks to added branches and complexity. If DEC64 were going to be "the only number type" in future languages, it would have to be much better than this.
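To make the comparison problem concrete, here is a minimal C sketch assuming an unpacked (coefficient, exponent) pair; the struct and function names are invented for illustration and are not DEC64's actual API:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical unpacked form of a DEC64-style number:
       value = coefficient * 10^exponent.  Not the real DEC64 layout or API. */
    typedef struct {
        int64_t coefficient;  /* a 56-bit signed field in the actual format */
        int     exponent;     /* an 8-bit signed field in the actual format */
    } dec_parts;

    /* Without normalization, (10, 0) and (1, 1) are different bit patterns for
       the same number, so equality can't be a plain bitwise compare.  One way:
       scale the operand with the larger exponent down until the exponents
       match (overflow handling omitted for brevity). */
    static bool dec_eq(dec_parts a, dec_parts b) {
        while (a.exponent > b.exponent) { a.coefficient *= 10; a.exponent--; }
        while (b.exponent > a.exponent) { b.coefficient *= 10; b.exponent--; }
        return a.coefficient == b.coefficient;
    }

    int main(void) {
        dec_parts ten_a = { 10, 0 };  /* 10 * 10^0 */
        dec_parts ten_b = {  1, 1 };  /*  1 * 10^1 */
        return dec_eq(ten_a, ten_b) ? 0 : 1;  /* exits 0: same value, different bits */
    }

Every one of those loop iterations is a branch that a normalized format doesn't need just to compare two values.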
Wtf is up with the clown car that is floating point standards?
IEEE 754 is a floating point standard. It has a few warts that would be nice to fix if we had tabula rasa, but on the whole is one of the most successful standards anywhere. It defines a set of binary and decimal types and operations that make defensible engineering tradeoffs and are used across all sorts of software and hardware with great effect. In the places where better choices might be made knowing what we know today, there are historical reasons why different choices were made in the past.
DEC64 is just some bullshit one dude made up, and has nothing to do with “floating-point standards.”
A good friend of mine worked on decimal floating point for IBM POWER chips (I think it was POWER7, which had hardware support).
Anyway, he insisted on calling it just "Decimal Floating". Because there was "no point".
There are a bunch of different encodings for decimal floating-point. I fail to see how this is the standard that all languages are converging to.
IEEE 754 standardizes two encodings, BID and DPD, for the decimal32, decimal64, and decimal128 formats. This is neither of those.
Many libraries use a simple significand + exponent approach, similar to the article, but the representation is not standardized; some use full integral types rather than specific bit fields (C# uses 96+32 bits, Python uses a tuple of arbitrary integers). It's essentially closer to fixed-point, but with a variable exponent.
The representation from the article is definitely a fairly good compromise though, especially if you're dealing with mostly-fixed-point data.
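As a rough illustration of that kind of representation (type and function names invented here, not any particular library's API), a C sketch:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy "integer significand + decimal exponent" value:
       value = significand * 10^exponent. */
    typedef struct {
        int64_t significand;
        int32_t exponent;
    } simple_dec;

    /* Multiplication needs no alignment: multiply the significands and add the
       exponents (overflow checks omitted). */
    static simple_dec simple_dec_mul(simple_dec a, simple_dec b) {
        return (simple_dec){ a.significand * b.significand, a.exponent + b.exponent };
    }

    int main(void) {
        simple_dec price = { 1999, -2 };  /* 19.99 */
        simple_dec qty   = {    3,  0 };  /* 3     */
        simple_dec total = simple_dec_mul(price, qty);
        /* Prints "5997 * 10^-2", i.e. exactly 59.97 -- no binary rounding. */
        printf("%lld * 10^%d\n", (long long)total.significand, total.exponent);
        return 0;
    }

The exponent only moves when an operation actually needs it to, which is why it behaves like fixed-point for typical money data.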
This seems optimized for fast integer operations. Except that if I only cared about integers, I'd use an actual integer type.
Near the end of the article, under Motivation:
> The BASIC language eliminated much of the complexity of FORTRAN by having a single number type. This simplified the programming model and avoided a class of errors caused by selection of the wrong type. The efficiencies that could have gained from having numerous number types proved to be insignificant.
DEC64 was specifically designed to be the only number type a language uses (not saying I agree, just explaining the rationale).
Well, I suppose it might be preferable if JavaScript used this type...
JavaScript engines do optimize integers. They usually represent integers up to +-2^30 as integers and apply integer operations to them. But of course that's not observable.
If you want the job done properly, this already exists: https://en.wikipedia.org/wiki/Decimal128_floating-point_form...
I was expecting something about floating point formats used by some DEC PDP series computer...
Yes, DEC did use a non-IEEE floating point format at least through the VAX-11 era. I was fooled by the title too.
Atari 8-bit BASIC used something pretty similar to this [1], except it did have normalization. It only had 10 BCD digits (5 bytes) and 2 digits (1 byte) for the exponent, so more of a DEC48, but still… That was a loooong time ago…
It was slightly more logical to use BCD on the 6502 because it had a BCD maths mode [2], so primitive machine opcodes (ADC, SBC) could understand BCD and preserve the carry, zero, etc. flags.
[1]: https://www.atarimax.com/freenet/freenet_material/5.8-BitCom...
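For anyone curious what that decimal mode buys you, here is a rough C analogue of a single-byte packed-BCD add, in the spirit of what decimal-mode ADC does; an illustrative sketch, not a cycle-accurate model of the 6502:

    #include <stdint.h>
    #include <stdio.h>

    /* One byte holds two packed BCD digits.  Add two such bytes plus a carry,
       correcting each nibble that overflows past 9 (the correction decimal
       mode performs for you in hardware). */
    static uint8_t bcd_add_byte(uint8_t a, uint8_t b, int *carry) {
        unsigned lo = (a & 0x0F) + (b & 0x0F) + (unsigned)*carry;
        unsigned hi = (a >> 4) + (b >> 4);
        if (lo > 9) { lo -= 10; hi++; }
        if (hi > 9) { hi -= 10; *carry = 1; } else { *carry = 0; }
        return (uint8_t)((hi << 4) | lo);
    }

    int main(void) {
        int carry = 0;
        uint8_t sum = bcd_add_byte(0x47, 0x38, &carry);  /* decimal 47 + 38 */
        printf("%02X carry=%d\n", sum, carry);           /* prints "85 carry=0" */
        return 0;
    }

Chaining this byte by byte, least-significant byte first, gives multi-digit decimal arithmetic with the carry flag doing the propagation.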
If they reversed the order of the fields, you could sort them by just pretending they are int64’s.
No, you couldn't, given "Normalization is not required", so we have 10³ × 9 < 10¹ × 100000, yet with the exponent in the high bits the first value would compare as the larger int64.
The memory savings from 32 bit or even 16 bit floats are definitely not pointless! Not to mention doubling SIMD throughput. Speaking of which, without SIMD support this certainly can't be used in a lot of applications. It definitely makes sense for financial calculations, though.
Can't wait to have it in my language!