The first example is incomplete: it forgets to clear errno before the call and then check it, and/or to compare the returned value against ±HUGE_VAL. That'd probably slow the strtod version down a bit more.
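For reference, a minimal sketch of what the full error handling looks like (parse_double is my name for a hypothetical wrapper, not anything from the article):

```cpp
#include <cassert>
#include <cerrno>
#include <cmath>
#include <cstdlib>

// Parse a double with the checks strtod actually requires:
// clear errno before the call, then inspect both the end pointer
// (did any digits get consumed?) and errno/±HUGE_VAL (overflow).
bool parse_double(const char* s, double* out) {
    char* end = nullptr;
    errno = 0;  // must be cleared first; strtod only sets it on error
    double v = std::strtod(s, &end);
    if (end == s) return false;  // no conversion performed
    if (errno == ERANGE && (v == HUGE_VAL || v == -HUGE_VAL))
        return false;            // value out of double's range
    *out = v;
    return true;
}
```

Note that ERANGE is also set on underflow (strtod returns a denormal or zero), which you may or may not want to treat as an error; the sketch above only rejects overflow.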
Performance in the opposite direction is also useful;
the Ryū algorithm[0] is a fast float-to-string conversion that I've used recently to great effect[1]. Namely, to convert path information from glyphs into vector graphics for a minimal math-oriented Java-based TeX implementation[2]. The result is code that converts 1,000 simple formulas as strings into individual SVG documents in under 600 milliseconds on newish hardware.
The hard cases are big exponents and large numbers of decimal digits. In general, it requires arbitrarily large integers (OK, OK, bounded by the maximum magnitude of a double, ~2^1023) and long division on said large integers. Denormalized numbers are a pain for sure.
I'm trying to think about which use cases would really be bottlenecked by parsing floats and they basically fall into two categories:
1) You're writing a JSON/some other human-readable serialization library
2) You need to interact with some API that you can't change that sends floats over the wire as strings (a subset of which will fall into the case of (1)).
For something like currencies you'd need a custom parsing engine anyway, since you'll typically represent monetary values as fixed-width integer multiples of the smallest unit (e.g., cents or basis points), except for trading engines where proper accounting isn't required. In fact, a lookup table may even be faster if the valid values are bounded.
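Such a parser can be tiny, since it never touches floating point at all. A sketch (parse_cents is a made-up name; assumes at most two fractional digits and no exponent):

```cpp
#include <cassert>
#include <cctype>
#include <cstdint>
#include <string>

// Parse a price like "123.45" directly into integer cents,
// with no floating-point rounding anywhere. Rejects anything
// with more than two fractional digits or trailing junk.
bool parse_cents(const std::string& s, int64_t* out) {
    size_t i = 0;
    bool neg = false;
    if (i < s.size() && s[i] == '-') { neg = true; ++i; }
    if (i >= s.size() || !std::isdigit((unsigned char)s[i])) return false;
    int64_t whole = 0;
    for (; i < s.size() && std::isdigit((unsigned char)s[i]); ++i)
        whole = whole * 10 + (s[i] - '0');
    int64_t frac = 0;
    int fdigits = 0;
    if (i < s.size() && s[i] == '.') {
        ++i;
        for (; i < s.size() && std::isdigit((unsigned char)s[i]); ++i, ++fdigits)
            frac = frac * 10 + (s[i] - '0');
    }
    if (i != s.size() || fdigits == 0 ? i != s.size() : fdigits > 2) return false;
    if (fdigits == 1) frac *= 10;  // "1.5" means 150 cents, not 105
    int64_t cents = whole * 100 + frac;
    *out = neg ? -cents : cents;
    return true;
}
```

(The validation line simplifies to: reject if there's trailing input or more than two fractional digits.)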
My guess is most of the things in (2) are going to be reading telemetry from embedded devices with firmware that simply won't be changed anymore and values are sent as text.
JSON doesn't use Javascript doubles. It uses a custom format that excludes NaNs and Infinities. It allows arbitrary precision values: "A number is a sequence of decimal digits with no superfluous leading zero. It may have a preceding minus sign (U+002D). It may have a fractional part prefixed by a decimal point (U+002E). It may have an exponent, prefixed by e (U+0065) or E (U+0045) and optionally + (U+002B) or - (U+002D). The digits are the code points U+0030 through U+0039. Numeric values that cannot be represented as sequences of digits (such as Infinity and NaN) are not permitted."—ECMA-404[1]
That means that not all JSON values can be represented as IEEE754 doubles (Javascript Numbers) and not all IEEE754 doubles can be represented as JSON values. It's a lossy conversion.
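That prose grammar translates into a short regex (my translation, not one from the spec; is_json_number is a made-up name):

```cpp
#include <cassert>
#include <regex>
#include <string>

// The ECMA-404 number grammar as a regex: optional minus, integer
// part with no superfluous leading zero, optional fraction,
// optional exponent. Note that NaN/Infinity never match, while
// values far outside double range (e.g. "1e999") do match --
// which is exactly the lossy mismatch described above.
bool is_json_number(const std::string& s) {
    static const std::regex re(
        R"(-?(0|[1-9][0-9]*)(\.[0-9]+)?([eE][+-]?[0-9]+)?)");
    return std::regex_match(s, re);
}
```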
Except you don't send IEEE floats over FIX. The precision is typically bounded. Also, that's more for placing orders. NASDAQ, for instance, uses ITCH, which is a binary stream.
You're right that you'd be leaving performance on the table if you used the new function in the hot loop, but it would still be quite handy for recorded FIX messages for compliance, auditing, risk tools, maybe for hedging, and maybe for execution tasks outside the hot loop -- that's a huge surface area.
As for exchanges, CME _only_ had FIX for latency-sensitive order entry until the iLink 3 migration began last year. CME is huge and there are others like it, even further behind. There is plenty of liquidity you're missing if you ignore non-binary protocols.
https://github.com/abseil/abseil-cpp/blob/master/absl/string...
ETA: I adapted the benchmark in the blog to use absl::from_chars and it's more than twice as fast as strtod.
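For anyone curious, absl::from_chars deliberately mirrors the standard C++17 std::from_chars interface from <charconv>; a sketch of the calling pattern using the std version (fast_parse is my name for the wrapper):

```cpp
#include <cassert>
#include <charconv>
#include <string>
#include <system_error>

// from_chars takes a [first, last) range, never touches errno or
// locales, and reports errors via an error code plus an end pointer --
// which is a big part of why it benchmarks so much faster than strtod.
bool fast_parse(const std::string& s, double* out) {
    auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), *out);
    // Require both success and full consumption of the input.
    return ec == std::errc() && ptr == s.data() + s.size();
}
```

(Swapping in absl::from_chars from absl/strings/charconv.h is a one-line change, and unlike the std version it's available on toolchains whose standard library hasn't implemented floating-point from_chars yet.)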