What you would need is for the language to track the expected range of the numbers. You often end up with multiple different multiply/divide implementations (with different shift amounts before/after) based on whether you can safely guarantee you are within an expected range.
I doubt it. It fails in far too many useful programming situations and would cause more problems than floating point.
You cannot use it efficiently for much of anything: finance software, science software, engineering software, high-quality graphics software, 3D software... pretty much anything that needs real dynamic range or the ability to lower error while accumulating information.
This is exactly why floating point was invented and standardized: fixed point fails for almost any program, and works only for certain situations, and only with much effort.
(I've written tons of fixed point code, numerical libraries across the spectrum from high performance, high quality, tunable quality, arbitrary precision libs, posits and unums, IEEE half-float software implementations, and more, so I do know what I'm talking about).
You might be surprised at where fixed point is already used:
* Finance software: if you're using floats, you're doing something horribly wrong. All balances are measured in integer multiples of some quantum; depending on the system, that may be cents, 0.1 cents, or 0.01 cents. Gradations finer than that simply do not exist, and integer overflow is both more likely to be noticed and more easily explained as a bug than precision loss is (and this matters when you have a regulator asking uncomfortable questions).
* CAD software: floating point may make sense for simulations, but at least for PCB design, layout is done in fixed point. You need consistent precision across the board, and avoiding edge cases in your geometry kernel from precision issues makes everything much easier. Besides, with a 1µm quantum, 32-bit numbers are sufficient for a board 4km on a side. If you need larger, I would love to see your fab.
* Robotics: maybe this one's just me, but expressing motion control algorithms in fixed point has saved me €1/part on more than a few occasions as a result of being able to use an MCU without hardware floating point. Compared to the €0.20/part saved by muntzing the rest of the circuit, the small amount of additional work was totally worth it.
Indeed, the tools for working with fixed point aren't great. C is a lost cause; the best you can do is name your variables things like velocity_12_4 and manually check that the precision lines up. Rust wasn't great when I tried it, though const generics may have improved things since. C++ was, astonishingly enough, quite good; I made a header-only fixedpoint.h with a templated struct `fixed<type,size,precision>` and all-inline operations. I get the impression that Ada would be even better, but I've yet to use it in anger.
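For the curious, here's a minimal sketch of what such a header can look like. I've simplified to two template parameters, and every name and rounding choice here is my own guess, not the actual fixedpoint.h described above:

```cpp
#include <cstdint>

// Hypothetical minimal fixed-point type: T is the underlying integer,
// Frac is the number of fractional bits (fixed<int32_t, 16> is Q16.16).
template <typename T, int Frac>
struct fixed {
    T raw = 0; // stored value, scaled by 2^Frac

    static fixed from_raw(T r) { fixed f; f.raw = r; return f; }
    static fixed from_double(double d) { return from_raw((T)(d * (T(1) << Frac))); }
    double to_double() const { return (double)raw / (T(1) << Frac); }

    fixed operator+(fixed o) const { return from_raw(raw + o.raw); }
    fixed operator-(fixed o) const { return from_raw(raw - o.raw); }
    // Multiply in a wider type, then shift back down (truncating).
    fixed operator*(fixed o) const {
        return from_raw((T)(((int64_t)raw * o.raw) >> Frac));
    }
    // Pre-shift the dividend so the quotient keeps Frac fractional bits.
    fixed operator/(fixed o) const {
        return from_raw((T)(((int64_t)raw << Frac) / o.raw));
    }
};
```

Usage is along the lines of `fixed<int32_t, 16> a = fixed<int32_t, 16>::from_double(1.5);` — the key design point is that multiply widens to 64 bits before shifting back down, which is exactly the kind of range bookkeeping the language won't do for you.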
The most common place is probably image/audio processing. Sample values are almost always quantized, even in intermediate stages. Decoding a JPEG, for example, is a bunch of fixed point math.
Professional tools even for audio use floating point engines (Ableton has done this for well over a decade [1]). While fixed point may be ok for input/output formats, it is certainly not suitable for complex audio processing needs, where errors accumulate far too fast for any fixed point system at equivalent bit depth.
And fixed point is an order of magnitude slower.
> Decoding a JPEG for example is a bunch of fixed point math
It can be done that way, but modern decoders use float for accuracy and speed when available. Search libjpeg-turbo [2] codebase for float and surprise yourself. Even the ancient libjpeg [3] uses float when possible.
Why not simply use fixed point? Because it's worse in nearly every way, an artifact of computing from the 1970s-1980s when the pieces of JPEG were designed (the DCT, plus lots of work culminating in the final JPEG spec in 1992). When JPEG was designed, computers generally didn't have floating-point hardware. The world of computing has changed a bit since then.
> You might be surprised at where fixed point is already used
Not really, I have worked on all those kinds of things professionally, in fixed point, floating point, and arbitrary precision where appropriate. You apparently have not.
> Finance software: if you're using floats, you're doing something horribly wrong. All balances are measured in integer multiples of some quantum; depending on the system, that may be cents, it may be 0.1 cents, or it may be 0.01 cents. Gradations finer than that simply do not exist,
I take it you have never actually worked in finance; I have. The claim is so ludicrously wrong that that much is clear.
Your claim might work for a free app that only handles your checkbook. Actual finance software has to do things like deal with multiple currencies, high precision compounded calculations, exchange rates that go to high precision and can handle trillions (4T+ a day goes through exchange markets), and on and on. To show you how ludicrous your claim is, let's do a simple experiment. I'll take a 64-bit double, you take 64-bit fixed point math, and let's compute compound interest (a tiny, tiny, trivial computation) tables. We'll make each input trivially simple (real code would handle vastly more cases). Here's your spec:
P = beginning principal in pennies, say loans from 0 to $100M (which is small, but I'll try to help your case, which will still fail...)
R = rate to 0.001, say mult by 1000 so it's an integer for you.
C = times compounded per year, 1-365 (again trivial, but this helps your cause)
Y = # of years to compound, 1-100 (again smaller than real code must handle, but it helps your case).
Your goal is to compute the amount the principal has grown to each compounding period, and at the end, return the amount rounded to the nearest penny (bankers rounding, or round up, or round down, whatever you can muster).
So let's see your function of form `long scaleFV(long P, long R, long C, long Y)` using fixed point.
A trivially simple version with doubles would look like
long FV2(long P, long R, long C, long Y)
{
    long N = C * Y; // total number of compounding periods
    double p = P;
    double r = R / 1000.0;
    while (N-- > 0)
    {
        p += p * r / (100.0 * C); // most trivial thing one can do
    }
    return (long)round(p); // round is from <math.h>
}
Compare both of these to the truth computed with big floats or similar. Any trivial fixed point version of the above will fail over 80% of the time (on inputs uniformly sampled from the ranges above). You will likely never find a case where the fixed point succeeds and the trivial double version fails.
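To make the comparison concrete, here's the double version from the spec above next to one naive fixed-point attempt (integer pennies in a long, truncating each period). The integer scaling here is my own assumption about how such an attempt would look:

```cpp
#include <cmath>

// Double version per the spec: P in pennies, R = rate * 1000,
// C = compoundings per year, Y = years; returns pennies, rounded.
long FV2(long P, long R, long C, long Y) {
    long N = C * Y;
    double p = P;
    double r = R / 1000.0;
    while (N-- > 0)
        p += p * r / (100.0 * C);
    return (long)std::round(p);
}

// Naive fixed-point attempt: keep the balance in integer pennies.
// Each period the sub-penny interest truncates away; giving the
// representation more fractional bits instead pushes p * R toward
// 64-bit overflow for large principals. That is the dilemma.
long fixedFV(long P, long R, long C, long Y) {
    long N = C * Y;
    long p = P;
    while (N-- > 0)
        p += p * R / (100000L * C); // R is rate*1000; the extra 100 converts percent
    return p;
}
```

For example, 1 penny at 1% compounded daily for 100 years: the double version grows the balance by a factor of about e and returns 3 pennies, while the integer version returns 1 penny, because 1 * 1000 / 36500000 truncates to zero every single period.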
Try it. Show the code that backs the claim you so confidently (and incorrectly) make above.
And remember this is a trivial, tiny part of the real needs for finance codebases. Once you realize how badly you fail at this simple task where the trivial floating point version works, then please stop claiming things you know nothing about.
> and this matters when you have a regulator asking uncomfortable questions
Oh, I take it you have had this happen? Or did you make this up based on zero experience?
> but at least for PCB design, layout is done in fixed point
In a few systems, like KiCAD, it is, but it's certainly not in high-end professional systems. Try laying out an ASIC with fixed point, where you might have features from the sub-nm level up to several cm and need to track error bounds. And once you need to simulate just about anything (SPICE, etc.), all your fixed point fails.
(Note I use KiCAD a lot for PCB design myself for gadgets I sell; it's good for a few things, but nowhere near as good as the high-end PCB design and analysis tools you'd want for high-frequency work and for getting past FCC emissions requirements, which I have also done.)
> and avoiding edge cases in your geometry kernel from precision issues makes everything much easier
Yes, easier but with larger error. There's a reason pro CAD kernels use floating point, not fixed point. Rotating things in fixed point by anything other than 90-degree increments produces errors larger by orders of magnitude, and soon your squares are not squares and your intersections end up with all sorts of bad behavior (see some neat discussion of this in Matt Pharr's Oscar-winning book PBRT).
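To put rough numbers on this, here's a sketch (my own illustration, not PBRT's example): rotate the point (1, 0) by 1 degree repeatedly, once in Q16.16 with truncated constants and shifts, once in doubles, and measure how far the radius drifts from 1:

```cpp
#include <cmath>
#include <cstdint>

// Rotate (1, 0) by 1 degree n times; writes the radius drift from 1
// for the Q16.16 truncating version and for the double version.
void rotateDrift(int n, double* fixedErr, double* doubleErr) {
    const double th = std::acos(-1.0) / 180.0; // 1 degree in radians
    const double cd = std::cos(th), sd = std::sin(th);
    // Q16.16 constants, truncated the way a naive table might store them.
    const int64_t c = (int64_t)(cd * 65536.0);
    const int64_t s = (int64_t)(sd * 65536.0);

    int64_t fx = 65536, fy = 0; // fixed-point (1, 0)
    double dx = 1.0, dy = 0.0;  // double (1, 0)
    for (int i = 0; i < n; ++i) {
        int64_t nx = (fx * c - fy * s) >> 16; // truncating shift each step
        int64_t ny = (fx * s + fy * c) >> 16; // (arithmetic shift assumed)
        fx = nx; fy = ny;
        double tx = dx * cd - dy * sd;
        double ty = dx * sd + dy * cd;
        dx = tx; dy = ty;
    }
    *fixedErr  = std::fabs(std::hypot(fx / 65536.0, fy / 65536.0) - 1.0);
    *doubleErr = std::fabs(std::hypot(dx, dy) - 1.0);
}
```

Over ten full revolutions (n = 3600) the fixed-point radius error comes out orders of magnitude larger than the double error (roughly 10^-3 versus 10^-13 in this configuration) — your unit square quietly stops being a unit square.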
And throughout all this, fixed-point is an order of magnitude slower when hardware float is available, such as places KiCAD runs.
> Robotics: maybe this one's just me, but expressing motion control algorithms in fixed point has saved me €1/part on more than a few occasions as a result of being able to use an MCU without hardware floating point. Compared to the €0.20/part saved by muntzing the rest of the circuit, the small amount of additional work was totally worth it.
It is just you. Yes, at some end-point controller you might use fixed point, but in the robotics work I've done I often also need things like inverse kinematics for motion planning (I'd hate to solve Jacobian-inverse problems in fixed point!), probabilistic SLAM flavors (which would also diverge badly in fixed point), all flavors of Kalman and other filtering for sensor fusion, KF and deep learning methods to get better estimates from 9DOF mag/gyro/accel sensors, and on and on.
So yes, it may be just you; I don't know anyone working professionally on robotics systems who does this (and I know a few dozen, having worked with them a loooong time, writing, guess what, numerical code for and with them).
> I made a header-only fixedpoint.h ...
Same, with an additional parameter for the underlying type so I can plug in int32 where available (e.g., ESP32), int16 where it is not (many Microchip parts), or even a custom type like an int64 emulation built on two int32s. I've got templated versions that do naive mults and divs (which is what most people seem to do), one with a correct last bit (slightly more bit twiddling, but useful if you want a little more precision), and versions that do faster divs on chips with software-emulated division (because for fixed point, since you're going to rescale the answer at the end, you can make your div much faster than a naive div followed by a shift). So yes, I have been down all this.
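The "correct last bit" multiply mentioned here usually comes down to adding half a ulp before the down-shift. A sketch for Q16.16, with my own function names:

```cpp
#include <cstdint>

// Q16.16 multiply, truncating: the shift always rounds toward -inf.
int32_t mul_naive(int32_t a, int32_t b) {
    return (int32_t)(((int64_t)a * b) >> 16);
}

// Q16.16 multiply, round-to-nearest: add half a ulp (1 << 15) to the
// 64-bit product before shifting, recovering the correct last bit.
int32_t mul_rounded(int32_t a, int32_t b) {
    return (int32_t)((((int64_t)a * b) + (1 << 15)) >> 16);
}
```

With a equal to one raw ulp and b = 0.5 (raw 32768), the naive product truncates to 0 while the rounded one returns 1 ulp — exactly the last-bit difference being described.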
Remember, please show your fixed point code that handles the trivial financial code task above, or stop claiming things you apparently have not done and do not understand.
> When you say "finance" I assume you mean Wall Street type stuff?
Nope, pretty much any common financial software. Excel is most likely the most used software for doing finance of any type - it's floating point. Quicken is the most common personal finance program - floating point. I'd guess the majority of personal finance programs do things like mortgages and credit card calculations, which would hit the same compound interest failures I listed above if done in fixed point, so they'd all be floating point.
If ALL you do is addition, then fixed point could work. But it fails at so many other basic accounting tasks that it would be ludicrous to write any modern program, or even a toy one, in it any more. Compound interest, mortgages, taxes, wealth planning - all floating point. The top 20 hits on github for finance - all use floating point for the math.
All the top hits for ERP on github - floating point. A list of top commercial ERP systems [1] puts Microsoft Dynamics at #1. Looking at its features, it's most certainly floating point (it has calculated fields allowing arbitrary probabilities, for example). Second place on the list is Syspro. Same thing - looking through their documentation, they allow arbitrary spreadsheet-like computations, even allowing machine learning to be integrated into computations - this is certainly floating point. Third on the list is QT9 ERP. Same result - they have modules called "Finance" that allow arbitrary calculations, which is certainly not done in fixed point. Multiple other modules look like they'd need floating point, since fixed point would simply accumulate error too quickly (and be slow).
Alternatively if you have any of these programs, you could pull them apart pretty quickly with Ghidra and you'd likely find all floating point math in them.
While fixed-point has uses, it is not used nearly as much any more as people want to believe, and it's certainly a terrible idea for any finance beyond adding simple numbers, i.e., a checkbook (which is itself getting outdated). If you cannot even do something as simple as compound interest, then I'd hesitate to call it finance software.
In the 1980s, floating point hardware was uncommon and there was no IEEE 754. Computing and finance have moved so far beyond the 1980s that I doubt much modern greenfield accounting software is fixed point any more.
The databases use NUMERIC/DECIMAL data types almost exclusively. Once in a while you will find a floating point type but it's pretty rare.
The code they write is not generally available to the public so I'm not sure what they are doing inside their code. Some probably use java's BigDecimal, some probably wrote their own libraries to handle data types (e.g. SAP).
> If ALL you do is addition, then fixed point could work. But it fails at every other basic accounting task that it would be ludicrous to do any modern or even toy programs in it any more.
How about the basic task of storing 0.1 or 0.01? Fixed point seems pretty good at that, and float (binary float) struggles.
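A quick sketch of that point: 0.1 and 0.2 have no exact binary-float representation, while integer cents are exact (function names here are mine):

```cpp
#include <cstdio>

// Binary floating point cannot represent 0.1 or 0.2 exactly, so their
// sum misses 0.3 by one ulp-ish amount.
bool tenthsAddExactly() {
    double a = 0.1, b = 0.2;
    std::printf("%.17f\n", a + b); // prints 0.30000000000000004
    return a + b == 0.3;           // false
}

// Integer cents represent the same quantities with no error at all.
long centsAdd() {
    long a = 10, b = 20; // 0.10 and 0.20 as integer cents
    return a + b;        // exactly 30
}
```

This is the storage-and-addition case where fixed point genuinely wins; the argument upthread is about everything past that.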
Decimal float works too, which is why IBM put the DFP units on the Power and Z CPUs.
Clearly you haven't looked at my CV. I spend about half my day reading this country's or that one's regulations to make sure that the small bit of the world's largest CSD (by at least one measure) that I work on follows the law, and I have the insatiable curiosity to read the rest of it as light bedtime reading. I know whereof I speak regarding financial systems. You don't.
Specifically, accounting law is old. Computers were used to automate what the bookkeepers did by hand, which was emphatically not floating point. At the end of an interest period, interest was computed and booked in rounded form. The next period, interest would be computed based on the number in the books, not the "floating point" number that was rounded from the last time interest was booked. More clearly: put 1c into a bank account earning 1% interest compounded daily. After 1000 years, that account will have precisely 1c in it, because there was never enough interest to earn at least one quantum. This is the difference between math and bookkeeping, and between the sort of math that passes for correct in an ERP system versus the sort of math needed by a bank.
Also, fixed point is not slow, at least not on the systems relevant to actual financial infrastructure. It's all on IBM Z, which has hardware instructions for fixed point decimal calculation because if it didn't, it would run like treacle.
Is there even any performance benefit on modern CPUs? I tried to consult some real tables but I'm not experienced enough to be sure I'm reading them correctly. If I'm reading something like [1] properly, it looks like it is not a clear win on modern hardware to use fixed point & integer operations. It would depend on the ratio of addition/subtraction/multiplication to division.
(Obviously one must factor out a lot of local considerations, which modern CPUs are full unto overflowing with; I'm kind of looking at a very, very broad average performance question across code bases doing enough different math operations to average out, not whether one particular loop can run theoretically run faster or slower with one or the other.)
It depends a lot on CPU architecture. Floating point units may be tied to each core, or they may be shared, so it may further depend on other concurrent workloads.
There are also SIMD instructions. Modern CPUs have built-in instructions for handling multiple ints or floats as a vector. If you can get your fixed-point values to fit into 8-bit or 16-bit fields instead of 32, the same-sized vector units can handle more values per instruction.
My gut would be that you also get some benefit by some auxiliary choices. With a lot of precomputed constants, you can probably avoid a lot of known multiplications. That said, yeah, if you have to start doing a lot of fixed point multiplications, you could eat any savings you had.
Address calculations are a place where fixed point can still have an advantage -- it's often less latency and fewer ops to step a fixed-point accumulator and shift it into an address offset on the integer units than to step a floating point accumulator, convert it to integer, then move that over from the FP/vector units to the integer units.
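A sketch of that pattern (my own example): stepping through a table at a fractional rate with a Q16.16 phase accumulator, where the integer part of the accumulator is the address offset and everything stays on the integer unit:

```cpp
#include <cstdint>

// Step a Q16.16 phase accumulator by 'step' and write the integer part
// (the address offset) of each position into out[0..n). No float ops,
// no int<->float conversions, no cross-unit moves: just add and shift.
void fixedIndices(int32_t step, int n, uint32_t* out) {
    int32_t acc = 0; // Q16.16 position
    for (int i = 0; i < n; ++i) {
        out[i] = (uint32_t)acc >> 16; // integer part = offset, one shift
        acc += step;
    }
}
```

Stepping at 1.5x (step = 3 << 15) yields offsets 0, 1, 3, 4, 6, ... — the classic DDA/resampler idiom where a floating-point accumulator would need a convert plus a register-file move every iteration.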